Why the public needs to know more about AI – An interview with Varsh Anilkumar

What is AI and how present is it in our lives?

AI is a broad term, but what it boils down to is using technology and mathematics to build use cases that make things more efficient.

AI use cases are present from the moment you get up, and even while you are asleep, which is what makes public knowledge about AI important. Consumer examples include recommendations for buying products online, ordering food, hailing a cab, finding mobility options, and even the wording and grammar of your emails.

When you apply for a loan from a bank, the bank uses AI services to power that decision. In HealthTech, AI is used from the moment you book your appointment through to diagnostics, as well as in drug discovery and delivery.

Use cases vary in implications. If you get the wrong recommendations for a restaurant, that is not such a big deal. If you miss out on a bank loan or get a wrong diagnosis, it can be life-altering or even life-threatening.

Why is public awareness about AI so important to you?

When you look at a large number of use cases, it is clear that people have to know how AI works and be part of the design from both a technical and governance perspective. 

There are two aspects of why public awareness of AI is important.

Governance: When it comes to governance, we can look at AI in FinTech and HealthTech, which always have an aspect of governance. Governance is usually introduced through policymakers. If the public understands how AI is built and the use cases it is built for, then the public is better equipped to influence AI governance. If people know about AI and how it is used, they can ask policymakers more informed questions and influence the design of AI systems, as well as the policies that govern the use cases. For example, AI can be involved in determining credit scores, and historical data from contexts that have since changed can still be used to determine a credit score today.

Feedback: Being able to provide actionable feedback to the people who built the AI is another critical point. It is important that the end user is involved in the design of AI products. There should be transparency about how AI works, and users should know how they can best use AI for their own benefit. Returning to the example above, people might demand that context be added to balance out bias in AI-informed credit rating models.

Feedback has a big impact on AI because the public can provide it through different mechanisms such as social media, feedback forms, advocacy groups, etc. However, the people making the AI have to listen in order for that feedback to have an impact.

The public can choose the right policymakers. If people know about the different components of AI and its impact on their day-to-day life, people will know how to ask the right questions and elect leaders who drive governance in a way that benefits people.

Why is the disconnect between the general public and the makers of AI a problem?

Bias: AI applications can be biased toward specific end-user groups and fail to represent the whole. AI used in designing safety features for automobiles, for example, has to factor in datasets from all user groups, covering the key attributes for the use case, such as height and weight. Without them, the safety feature can malfunction for the user groups whose data points were not factored into the design.

Stifled innovation: The disconnect by default excludes people of specific backgrounds from being able to contribute to AI applications. The lack of these perspectives decreases innovation, because diversity has a positive correlation with innovation.

Inefficient governance structures: With any digitalization, data is always utilized to make the use case more efficient. If the public cannot choose the right policymakers and provide proper feedback, you end up with governance structures based on bad feedback and the wrong questions. This dynamic has massive downstream implications, such as latencies in HealthTech applications that could, for example, delay access to improved diagnostics. It can also affect economic growth in a region and the economic activity of society as a whole.

What are some of the root causes for a disconnect between people who build AI and the people who use it?

Knowledge gap: AI is still perceived as magical by some people. The knowledge gap exists because many companies don't make the technologies behind AI as accessible as those in other domains. There are forums like Kaggle; by comparison, though, web technologies offer far more open-source frameworks you can use to build applications. This is slowly changing, but the findability and visibility of these open-source resources are key to closing the knowledge gap.

Fear of math: Math underpins the fundamental building blocks of AI, but you don't need to be an expert mathematician to understand how AI works as an end user. With a high-school-level understanding of math, you can understand the building blocks of AI.

Community component: A lot of companies in deep tech are closed about how they build AI technology. They offer visibility to AI researchers, but many end users are excluded from this community.

Wrong communication channels are chosen: Companies often communicate with the public but choose the wrong channels. For example, rather than publishing on YouTube, they stick to their own websites. Unless you already follow a company, its content is not findable. Companies can even use AI, by way of the YouTube algorithm, to make their content about AI more findable.

Which elements of mathematics are important to grasp in order to understand AI?

Probability allows you to reason about uncertainty and estimate cause-and-effect relationships in data.

Linear algebra allows you to represent data efficiently in order to build AI models.
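To make these two building blocks concrete, here is a minimal sketch. The loan-approval framing, the data, and the weights are all hypothetical, invented purely for illustration: applicants are stored as rows of a matrix (linear algebra), and a dot-product score is squashed into a probability.

```python
import math

# Hypothetical toy data: each row is one loan applicant,
# columns are [income (scaled), years of credit history (scaled)].
X = [
    [0.8, 0.6],
    [0.2, 0.1],
]

# Hypothetical learned weights and bias for a simple logistic model.
w = [1.5, 2.0]
b = -1.0

def dot(u, v):
    # Linear algebra: combine one applicant's features with the weights.
    return sum(a * c for a, c in zip(u, v))

def sigmoid(z):
    # Probability: squash any score into a value between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# Each prediction is a dot product plus a bias, read as P(approve).
probs = [sigmoid(dot(row, w) + b) for row in X]
```

With these made-up numbers, the first applicant gets a probability above 0.5 and the second below it; the point is only that a matrix of data plus a weight vector (linear algebra) and a squashing function (probability) already form a recognizable AI building block.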

What key steps can be taken to enable better public understanding of AI?

Accessible thought leadership: Andrew Huberman is a neuroscience researcher who shares a lot of his work through his podcast and makes it accessible to the general public by breaking down his research in a way that people can relate to. If somebody wants to dig deep, he provides resources for how people can do their own research. 

Increasing findability: Communicate about AI through channels the general public can actually find, as discussed above.

Up-to-date education: Education should keep pace with what is happening in the field.

How can education be more up to date and improve understanding of AI?

Relevance: Education is focused on high-level concepts, but there should be a component of how these technologies are used in the current day and age. 

Early AI education: AI should be introduced at an earlier stage as a compulsory subject rather than at a later stage as an elective. Today, it is mostly people already going down the technology track who learn about AI; early, compulsory education would also reach those who do not take that track.

Intuitive mathematics education: A lack of focus on building intuition often causes a lack of interest in mathematics. Intuition is the foundation we develop in any field we want to specialize in. Intuition for the fundamentals of a domain gives us fluidity and flexibility with the concepts and applications built on top of those fundamentals, which also makes research and exploration in the domain easier.

What is an example of intuitive mathematics education?

Let us take an example from calculus.

The essence of calculus can be grasped through the example of deriving the area of a circle, which then extends to the area under a curve and to much more complex structures. Working through that example builds the foundation that develops intuition. In a YouTube playlist, Grant Sanderson beautifully explains the essence of calculus in a manner that develops this intuition in the viewer. If the intuition is not developed, the learner misses the idea that math equips you with powerful tools for applied fields like AI and engineering, cannot relate to math in that way, and may instead develop a sense of fear and uncertainty.
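The classic way this derivation is presented (a sketch of the standard argument, not a quote from the interview) is to slice the disc into thin concentric rings: each ring of radius r and width dr, unrolled, is nearly a rectangle of area 2πr·dr, and summing the rings is exactly integrating 2πr from 0 to R. A few lines of code make the intuition tangible:

```python
import math

# Slice a disc of radius R into n thin rings of width dr.
# Each ring, unrolled, is roughly a rectangle of area 2*pi*r*dr.
R = 3.0
n = 100_000
dr = R / n

# Summing the ring areas is a Riemann sum for the integral of 2*pi*r.
area = sum(2 * math.pi * (i * dr) * dr for i in range(n))

# As dr -> 0, this sum converges to pi * R**2, the area of the circle.
```

Running this with a large n makes `area` agree with π·R² to several decimal places, which is precisely the intuition the integral formalizes.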

How would better public awareness impact AI? 

If AI were created through more open-source technologies, more people would know how to contribute, and people from more diverse backgrounds would be able to do so. The impact would be:

  • Decrease in bias: Data points that are more representative of humanity would enable us to create better and more humane AI.
  • Increase in innovation: A more diverse perspective on how AI is built fosters better innovation.

Having the general public involved in both the technological and governance aspects of AI would enable much larger and better AI use cases, as well as well-governed technology that serves the best interests of end users and is representative of humanity as a whole.

Who is Varsh Anilkumar?

Varsh Anilkumar is an engineer and entrepreneur passionate about solving problems using
technology.

He has worked in leading AI and data engineering projects for a wide range of use cases
ranging from e-commerce to health tech.

Varsh Anilkumar is also the founder and director of a health tech startup, BlockMMP. Based in the US, BlockMMP offers technological solutions to help address the ongoing opioid crisis in North America and a few other parts of the world. It provides a secure prescription dosage tracking software platform that streamlines the dosage tracking and administration of medications used to treat patients with opioid use disorder. Initially funded by an NIH grant for research and development, the startup has scaled the platform to function end to end, providing better care and trust for patients suffering from opioid use disorder. Varsh is also working on a computational platform that utilizes AI for cancer research, in collaboration with several researchers in the field.


He is passionate about democratizing technical education, specifically in the fields of AI and data science engineering, and is actively involved in mentoring engineering students at Boston University as well as professionals transitioning into tech through other organizations such as the Global AI hub.

Varsh actively advocates for rational collaboration between technology developers and policymakers to enable fair, well-governed, and unbiased technology services. He is currently writing a book that emphasizes how a clear understanding of AI empowers end users to be more actively involved in its design and policy aspects, by giving them the ability to ask informed questions of AI's tech and policy architects.
