Raindrops Keep Falling on our Sensors

The University of Chicago Weather Service has purchased a new rainfall sensor. This sensor takes measurements every few seconds, which can be handed off to a computer for further analysis. Each measurement is a non-negative integer representing how many millimeters of rainfall have been recorded since the previous measurement.

Given this information, we would like to compute the average rainfall, which is simply the average of the non-negative integers produced by the sensor. However, there’s one catch: this new sensor occasionally produces negative integers representing faulty measurements. These have to be discarded and shouldn’t be taken into account.

Input

The input consists of two lines. The first line contains a single positive integer $n$ ($1 \le n \le 100$), the number of rainfall measurements collected by the University of Chicago Weather Service. The second line contains the $n$ measurements, separated by single spaces. Each measurement is an integer (which you can assume fits in a 32-bit signed integer).

Output

If the input contains only negative measurements, you must print the text INSUFFICIENT DATA.

Otherwise, you must print a single integer: the average rainfall for the dataset, rounded down (i.e., only the integer part of the average), computed after discarding the negative measurements.
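The specification above can be sketched as a short solution. This is an illustrative Python sketch, not official reference code; the function name and the choice to read all of stdin at once are assumptions.

```python
import sys

def average_rainfall(data: str) -> str:
    # First token: the number of measurements n; remaining tokens: the measurements.
    tokens = data.split()
    n = int(tokens[0])
    values = [int(x) for x in tokens[1:1 + n]]
    # Negative readings are faulty and must be discarded.
    valid = [v for v in values if v >= 0]
    if not valid:
        return "INSUFFICIENT DATA"
    # All valid values are non-negative, so floor division
    # yields exactly the integer part of the average.
    return str(sum(valid) // len(valid))

# To run on a judge, one would print(average_rainfall(sys.stdin.read())).
```

Floor division is safe here because the retained values are non-negative, so rounding down and truncating toward zero coincide.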

Sample Input 1
3
5 10 15

Sample Output 1
10

Sample Input 2
3
-10 -6 -7

Sample Output 2
INSUFFICIENT DATA

Sample Input 3
5
14 -5 39 -5 7

Sample Output 3
20