How to ‘fix’ a boson?

When people run into issues with their car or microwave or any man-made object, they tend to put on their engineering hats and try to ‘fix’ the problem (often succeeding in doing so). I find this approach to dealing with ‘man-made’ things rather interesting because it assumes that a solution exists which can be ‘discovered’ upon inspection. The same thing happens in GitHub repositories, where tons of issues get reported regularly and people eventually resolve them by making changes to the code. Why do we believe that a solution must exist and go looking for it? Is it because the thing was made by another human being and, ergo, a solution must exist? I am not sure, but it seems to serve as a motivation to try, which is powerful.

Having said that, I find it hard to transpose the same ‘fix it’ philosophy, which applies so well to everyday things, to the study of nature. What are we trying to fix in nature? Well, if we let go of the perception that a scientist is someone who works in a lab and looks into a microscope all day, this question becomes a little easier to tackle.

For example, what if we picture a particle physicist as someone who is fluent in Python, QFT, GR and group theory; works primarily on the ‘front-end architecture of atomic particles’; and maintains a GitHub repo titled ‘my-standard-model’, where typical issues would be along the lines of ‘missing charge’ or ‘Help: ValueError: Output contains NaN, infinity or a value too large for dtype(‘float64’)’? And every couple of months or so, some of these issues get resolved and a PR follows. It sounds a little weird to say out loud, but it is certainly conceivable. I guess what I am trying to get at is that this feedback loop of ‘find issue in model/codebase –> report it –> try to fix it –> if resolved, close issue –> repeat’ might be more universally applicable than we thought.


Searching for something on the internet

In a previous post in this series, titled ‘Keeping up with research‘, we talked about how it’s becoming increasingly difficult for scientists to identify quality research work as the number of research papers in their field increases. I think this discussion does not pertain only to research; it hints at a more fundamental problem in parsing information on the internet.

 

  • Frustrating Google results

    • This blog post (along with these Reddit and ycombinator threads) describes how the relevance of Google search results has plummeted to the point where users end up appending something like “site:reddit.com” to their search terms to get better results.

 

  • Normalization

    • Normalization refers to social processes through which ideas and actions come to be seen as ‘normal’ and become taken-for-granted or ‘natural’ in everyday life.
    • What’s scarier than not getting relevant results? Search engines acting as a means of normalizing thinking across society.
      • This thread delves deeper into this idea.

 

  • Understanding the ‘internet brain’

    • This Psychology Today series on the ‘internet brain‘ explores how our brains respond to stimuli from the internet. Although it does not solve any of the above issues directly, it might help us recognize the patterns that lead to manipulation.

 

Keeping up with research

This Physics Today editorial talks about a study which found that, as the number of papers in a field increases, it becomes increasingly difficult for researchers to recognize innovative work, and progress, as a result, stalls. This got me thinking about different ways to mitigate the issue:

 

  • Parsing journal articles faster?
    • Some journals include a ‘Plain Language Summary’ or ‘Significance of work’ statement in the abstract section of the paper. These are usually provided by the authors and might help readers parse information faster (although with some clear loss of detail). Here are a few journals which have implemented some form of this:

 

  • Asking AI to summarize

    • A while ago, I stumbled upon a Python module called Sumy and an online web summarizer (ExplainToMe), both of which can condense HTML pages and documents into a few sentences. Quite frankly, their performance was surprisingly good. So another way to address this problem might be to invoke the machines and let NLP do the summarizing; a minimal sketch using Sumy appears at the end of this list.

 

  • Online journal clubs

    • A more “unbiased” alternative to summarizing articles might be online discussion (via YouTube, Reddit, Twitter, forums, etc.). I find Fermat’s Library’s approach to this close to ideal: they discuss one interesting research article per week and also allow readers to annotate and highlight it.
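
For the curious, here is a minimal sketch of the kind of extractive summarization Sumy supports, based on its documented LSA-summarizer usage. The URL and sentence count are placeholders of my own choosing, and this is not necessarily the pipeline ExplainToMe uses under the hood.

```python
# Minimal extractive summarization sketch with Sumy
# (assumes `pip install sumy` and the NLTK tokenizer data it needs).
from sumy.parsers.html import HtmlParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.nlp.stemmers import Stemmer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.utils import get_stop_words

URL = "https://example.com/some-article"  # placeholder, not a specific article
SENTENCE_COUNT = 3                        # how many sentences to keep

# Fetch the page and tokenize it into sentences and words
parser = HtmlParser.from_url(URL, Tokenizer("english"))

# LSA-based summarizer: scores sentences and keeps the top-ranked ones
summarizer = LsaSummarizer(Stemmer("english"))
summarizer.stop_words = get_stop_words("english")

for sentence in summarizer(parser.document, SENTENCE_COUNT):
    print(sentence)
```

Note that this kind of summary is extractive: it simply pulls the highest-scoring sentences out of the original text rather than rewriting anything, which is roughly why the loss of detail mentioned above is unavoidable.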