In today’s world we are swamped with information. Everything on the internet is competing for our attention, as is this blog post. The situation gets worse on infinite-scrolling websites, where the unlimited supply of content ensures that we are always playing catch-up. Fear of missing out (FOMO, as the savvy call it) is a real problem that makes us obsessively scroll through the latest posts, pictures and videos on any social media website.

Imagine that you are scrolling through your Twitter or Facebook newsfeed. Have you ever been in a situation where there is just too much to see? You keep scrolling down and down until you don’t even remember why or where you started in the first place.

The rate at which we are presented with information, and the limits of how much of it we can process, raise several conundrums. Are we capable of understanding information presented to us so rapidly? How does our processing ability decline when the amount of information becomes overwhelming? What mental shortcuts do we employ, and how can they affect our understanding of the content? What is the half-life (or decay rate) of information acquired in this manner?

To understand some of these questions in the context of scrolling through a Twitter newsfeed, I used two approaches: theoretical and experimental. With the theoretical approach I calculated the minimum amount of time (T-min) required to process a prototypical tweet. The experimental approach let me track the speed at which I actually consume my Twitter newsfeed. If the average time spent per tweet in the experiment is lower than T-min, then I can argue that I am not fully understanding the tweets, and any learning from that experience would be inaccurate, misleading and potentially disastrous.

## Theoretical Approach

The basic question I wanted to answer was: what is the minimum amount of time needed to cognitively process the content of a tweet? So I took as an example a prototypical news tweet packed with information.

**Anatomy of a tweet**

- **Words**: author name, Twitter handle, body text, hashtag name, image text.
- **Shapes**: verified icon, chevron icon, comment icon, retweet icon, favourite icon, extra icon.
- **Digits**: time when the tweet was posted, number of comments, number of retweets, number of favourites.
- **Images**: profile picture of the author, image associated with the attached link.
- **Special characters**: @ and #

**Assumptions**

- Based on a few examples, I took the number of words per line in the desktop version of a tweet to be around 10.
- Each tweet has roughly three lines of body text.
- The caption of the linked image also comes to around 3 lines.

So the total number of words in a tweet (N1) = 3 lines of body text + 2-word author name + 1-word Twitter handle + 2-word hashtag + 3 lines of image caption

N1 = 3*10 + 2 + 1 + 2 + 3*10 = 65

Total number of shapes in a tweet (N2) = 6

Total number of digits in a tweet (N3) = 4

Total number of images in a tweet (N4) = 2

Total number of special characters in a tweet (N5) = 2
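
The element counts above can be collected in a short script (a minimal sketch, using the assumed tweet anatomy of ~10 words per line, 3 lines of body text and 3 lines of caption):

```python
# Element counts for the prototypical tweet, per the assumptions above.
WORDS_PER_LINE = 10  # assumed words per line in the desktop layout

n_words = 3 * WORDS_PER_LINE + 2 + 1 + 2 + 3 * WORDS_PER_LINE  # body + name + handle + hashtag + caption
n_shapes = 6   # verified, chevron, comment, retweet, favourite, extra icons
n_digits = 4   # timestamp, comments, retweets, favourites
n_images = 2   # profile picture, link image
n_chars = 2    # '@' and '#'

print(n_words, n_shapes, n_digits, n_images, n_chars)  # 65 6 4 2 2
```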

Now that we have broken the tweet down into its constituent elements, we are ready to calculate the time it takes an average human to cognitively process all of them. But before we do, I want to give a quick introduction to the human information processor. If you are interested in learning more, read *The Psychology of Human Computer Interaction* by Stuart Card, Thomas Moran and Allen Newell.

### The Model Human Processor

In the realm of human-computer interaction, human beings work as information-processing systems. We perceive a stimulus presented on the screen through a perceptual processor, understand it through a cognitive processor, and finally act using a motor processor. For example, to check out while shopping online, we first perceive the buy button, recognise that it is the button we are looking for, and then click it. Each of these processors has a cycle time, i.e. the time it requires to complete its task.

Average Perceptual processor cycle time (Tp) = 100 [50-200] msec

Average Cognitive processor cycle time (Tc) = 70 [25-170] msec

Average Motor processor cycle time (Tm) = 70 [30-100] msec

Of course, the cycle time of each processor varies with the specific characteristics of the stimulus presented, such as magnitude, frequency, complexity and duration. It also depends on the practice level of the human involved.

We first perceive the words, icons, digits, etc. in the tweet and then process them, so for each element the time will be Tp + Tc. I have used specific values of Tc for different types of elements. An image from page 43 of the book is given as a reference.

### Using the model

One cycle of the cognitive processor is required to notice that we have completely read the tweet and are ready to move on to the next one. It then instructs the motor processor, which takes time to process the instruction. Thus scrolling down to the next tweet takes Tc + Tm.

Overall, the theoretical time taken to go through a tweet is: T-min = {Words × [Tp + Tc] + Shapes × [Tp + Tc] + Digits × [Tp + Tc] + Images × [Tp + Tc] + Characters × [Tp + Tc]} + Tc + Tm

Plugging in the values, and assuming a Tc of 100 msec for images, we get 11,807 msec, i.e. roughly 12 sec. This means that we would have to spend at least 12 seconds per tweet to get a decent understanding of its contents. For now we are ignoring the complexities of different topics, languages, decision making, etc., all of which can only increase the calculated value.

Hence, T-min = 12 sec
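
As a sanity check, the same calculation can be scripted. The element-specific Tc values used in the post (taken from the book's table) are not reproduced here, so this sketch substitutes the average Tc = 70 msec for every element except images (100 msec, as stated above). The total comes out slightly higher than 11,807 msec, but in the same ballpark:

```python
# Model Human Processor average cycle times, in msec
TP = 100  # perceptual processor
TC = 70   # cognitive processor (average; the post uses per-element values)
TM = 70   # motor processor
TC_IMAGE = 100  # cognitive time assumed for an image, as stated above

counts = {"words": 65, "shapes": 6, "digits": 4, "images": 2, "chars": 2}

t_min = sum(n * (TP + (TC_IMAGE if kind == "images" else TC))
            for kind, n in counts.items())
t_min += TC + TM  # one cognitive cycle to decide to move on, plus the scroll action

print(f"T-min ≈ {t_min} msec ≈ {t_min / 1000:.1f} sec")  # T-min ≈ 13630 msec ≈ 13.6 sec
```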

## Experimental Approach

This part is pretty straightforward. I timed myself while scanning screenshots of my Twitter feed and calculated the speed at which I read the tweets.

I used a Chrome plugin called “Awesome Screenshot: Screen Video Recorder” to generate long screenshots of my Twitter feed, each around 30,000 pixels in length. Over 18 trials of going through the newsfeed content, the average time spent reading one tweet was around 6 sec.

**Here is the complete data set of 18 trials**

| Screen Number | Time Taken (sec) | Number of tweets | Time per tweet (sec) |
| --- | --- | --- | --- |
| B1 – 1 | 154 | 33 | 4.7 |
| B1 – 2 | 184 | 35 | 5.3 |
| B1 – 3 | 160 | 36 | 4.4 |
| B1 – 4 | 141 | 32 | 4.4 |
| B2 – 1 | 238 | 37 | 6.4 |
| B2 – 2 | 197 | 34 | 5.8 |
| B2 – 3 | 188 | 31 | 6.1 |
| B2 – 4 | 228 | 34 | 6.7 |
| B3 – 1 | 244 | 44 | 5.5 |
| B3 – 2 | 199 | 35 | 5.7 |
| B3 – 3 | 199 | 31 | 6.4 |
| B3 – 4 | 248 | 32 | 7.8 |
| B3 – 5 | 211 | 33 | 6.4 |
| B4 – 1 | 228 | 40 | 5.7 |
| B4 – 2 | 224 | 40 | 5.6 |
| B4 – 3 | 258 | 38 | 6.8 |
| B4 – 4 | 248 | 44 | 5.6 |
| B4 – 5 | 232 | 35 | 6.6 |
| **Average** | | | **5.883** |
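
The per-tweet times and the overall average can be reproduced from the raw trial data (a minimal sketch; the tuples are transcribed from the table above as `(time_taken_sec, n_tweets)`):

```python
# (time_taken_sec, n_tweets) for the 18 screenshot trials
trials = [
    (154, 33), (184, 35), (160, 36), (141, 32),          # B1
    (238, 37), (197, 34), (188, 31), (228, 34),          # B2
    (244, 44), (199, 35), (199, 31), (248, 32), (211, 33),  # B3
    (228, 40), (224, 40), (258, 38), (248, 44), (232, 35),  # B4
]

per_tweet = [round(t / n, 1) for t, n in trials]  # time per tweet, one decimal
average = sum(per_tweet) / len(per_tweet)

print(f"average time per tweet = {average:.3f} sec")  # 5.883 sec
```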

## Analysis

Now that we have a data set of 18 trials and a benchmark time calculated by the theoretical method, we can use standard statistical techniques to ascertain whether there is a significant difference between the sample mean time and the benchmark time. Let’s do this formally.

**Null hypothesis** (H0): The subject is giving proper time to read the tweets, i.e. the sample mean is equal to or greater than the benchmark score.

**Alternative hypothesis** (H1): The subject is not giving enough time to read the tweets, i.e. the sample mean is less than the benchmark score.

To test the hypotheses we will conduct a one-sample t-test. The t-test computes an obtained t statistic and checks whether it is greater or less than the critical t statistic. **If the obtained t statistic for the sample is greater than the critical t statistic, then the area under the t distribution curve beyond it is smaller than the area beyond t critical.**

That area under the curve indicates the probability of obtaining a t value at least that large by sheer chance. Comparing the two probabilities, if p(t obt) is less than p(t crit), there is little evidence that chance alone is responsible for such a t value, which means the sample mean genuinely differs from the benchmark value rather than merely by randomness.

If t obt > t crit

then p obt < p crit

hence reject H0

Using this online tool, I calculated the obtained t as **29.6020** with degrees of freedom (df) = n − 1 = 18 − 1 = 17. The critical t for df = 17 and alpha = 0.01 (one-tailed) is **2.567**. Clearly, t obtained is greater than t critical at a significance level of 0.01. Hence we reject the null hypothesis, which states that the subject is giving proper time to read the tweets.

Some other results of the t-test:

**Confidence interval:**

The hypothetical mean (benchmark value) is 12.000

The actual mean is 5.883

The difference between these two values is -6.117

The 95% confidence interval of this difference: From -6.553 to -5.681

**Intermediate values used in calculations:**

t = 29.6020

df = 17

standard error of difference = 0.207
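
These intermediate values can be reproduced with a short script (a sketch using only the standard library; the 95% confidence interval uses the usual two-tailed Student-t critical value 2.110 for df = 17):

```python
import math
import statistics

# Time per tweet (sec) for the 18 trials, from the table above
per_tweet = [4.7, 5.3, 4.4, 4.4, 6.4, 5.8, 6.1, 6.7, 5.5,
             5.7, 6.4, 7.8, 6.4, 5.7, 5.6, 6.8, 5.6, 6.6]
benchmark = 12.0  # theoretical T-min, in seconds

n = len(per_tweet)
mean = statistics.mean(per_tweet)
se = statistics.stdev(per_tweet) / math.sqrt(n)  # standard error of the mean
t_obt = (benchmark - mean) / se                  # one-sample t statistic, df = n - 1

# 95% CI of (sample mean - benchmark), t_crit = 2.110 for df = 17, two-tailed
t_crit_95 = 2.110
diff = mean - benchmark
ci = (diff - t_crit_95 * se, diff + t_crit_95 * se)

print(f"t = {t_obt:.4f}, se = {se:.3f}")      # t = 29.6020, se = 0.207
print(f"95% CI: {ci[0]:.3f} to {ci[1]:.3f}")  # 95% CI: -6.553 to -5.681
```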

## Result

Since we have rejected the null hypothesis, we must accept the mutually exclusive and exhaustive alternative hypothesis, which says that the subject is not giving enough time to read each tweet.

Clearly the average time given to a tweet, nearly 6 seconds, is far less than the calculated minimum of 12 seconds. We also checked the significance of this with the t-test, so we can be quite sure that the result is not due to chance.

Thus the subject, i.e. me, is not equipped to fully understand the complexities of the tweets at this reading speed.

## Discussion

- You may say that not all tweets in a feed resemble the one I considered. Point taken! So I recalculated T-min for a “lean” tweet with no link image and no image caption. The resulting T-min was 7.2 seconds [you can verify this value using the theoretical approach described above]. This is still greater than the average value from the experiment, meaning the test subject is still not giving enough time to understand the content of the tweets.
- I have only included my own experimental data here, as I do not have access to a lab and participants. Thus we cannot generalise this result to everyone at this stage.
- T-min does not include the time needed to critically judge a tweet and reason about any factual inaccuracies it may contain. Those higher-order functions would take even more time.

## Future Work

As we saw, the rate at which we consume information is nowhere near slow enough for us to understand it. By not understanding and evaluating the information we consume, we leave ourselves susceptible to the dangers of fake news, manipulative posts, misleading advertisements and all types of propaganda.

To judge our ability to catch false information, I propose a second experiment: in the first round I will measure an individual’s casual reading speed; in the second round I will ask them to judge whether the information presented is correct. The hypothesis is that the reading speed in the first round will be greater than in the second. Additionally, when shortcuts like the author name and verified symbol are blurred out, people will have difficulty judging whether the content is fake.

## Bibliography

- https://askdatascience.com/231/how-is-the-model-human-processor-woking
- *The Psychology of Human Computer Interaction* by Stuart Card, Thomas Moran and Allen Newell
- https://chrome.google.com/webstore/detail/awesome-screenshot-screen/nlipoenfbbikpbjkfpfillcgkoblgpmj?hl=en