Searching for something on the internet

In a previous post in this series, titled ‘Keeping up with research’, we talked about how it’s becoming increasingly difficult for scientists to identify quality work as the number of research papers in their field grows. I think this discussion doesn’t pertain only to research; it hints at a more fundamental problem in parsing information on the internet.


  • Frustrating Google results

    • This blog post (along with these Reddit and ycombinator threads) describes how Google search result relevance has plummeted to the extent that users end up wrapping search terms in quotation marks (“”) to force exact matches and yield better results.


  • Normalization

    • Normalization refers to social processes through which ideas and actions come to be seen as ‘normal’ and become taken-for-granted or ‘natural’ in everyday life.
    • What’s scarier than not getting relevant results? Search engines acting as a means to normalize thinking in society.
      • This thread delves deeper into this idea.


  • Understanding the ‘internet brain’

    • This Psychology Today series on the ‘internet brain’ explores how our brain responds to stimuli from the internet. Although this does not solve any of the above issues directly, it might help us identify patterns leading to manipulation.


What’s the storyline? : April 20, 2022


What’s the storyline? : April 21, 2022

Keeping up with research

This Physics Today editorial talks about a study which found that as the number of papers in a field increases, it becomes increasingly difficult for researchers to recognize innovative work, and progress, as a result, stalls. This got me thinking about different ways to mitigate this issue:


  • Parsing journal articles faster?
    • Some journals include a ‘Plain Language Summary’ or ‘Significance of work’ section alongside the abstract. These are usually written by the authors and might help readers parse information faster (although with some clear loss of detail). Here are a few journals which have implemented some form of this:


  • Asking AI to summarize

    • A while ago, I stumbled upon this Python module called Sumy and an online web summarizer (ExplainToMe), which were able to summarize the contents of HTML pages and documents into a few sentences. And quite frankly, their performance was surprisingly good. Another way to address this issue might be to invoke the machines and use NLP.
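Under the hood, simple extractive summarizers of this kind often score each sentence by the frequency of its words across the document and keep the top scorers. A minimal sketch of that idea in plain Python (this illustrates the general technique, not Sumy’s actual API; the sentence splitter and scoring are deliberately naive):

```python
import re
from collections import Counter

def summarize(text, n=2):
    """Naive extractive summary: keep the n sentences whose words
    are most frequent across the whole text."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Word frequencies over the full document.
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the summed frequency of its words.
    score = lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n]
    # Return the chosen sentences in their original order.
    return [s for s in sentences if s in top]
```

Real libraries replace the frequency score with something more principled (Sumy, for instance, ships several summarizer strategies such as LSA- and TextRank-based ones), but the extract-and-rank skeleton is the same.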


  • Online journal clubs

    • A more “unbiased” alternative to summarizing articles might be online discussion (via YouTube, Reddit, Twitter, forums, etc.). I find Fermat’s Library’s implementation of such a routine quite ideal: they discuss one interesting research article per week and also allow readers to annotate and highlight it.