This is a continuation of my recent post about AI adoption in various fields. It is the very beginning of an important and evolving discussion around the research question below.

‘Is AI hype, and have we reached the inflection point in its global adoption?’

Having travelled across almost half the world in the past few weeks, I had the opportunity to meet many experts, entrepreneurs, CEOs of AI startups, and academics. My most memorable experience was meeting a 74-year-old heart surgeon on a train travelling through parts of north India in the beautiful monsoon season. I was surprised when he started talking to me about the complexities of prediction using convolutional neural networks in imaging. He was well versed in all the latest AI innovations in the field of medicine. We had a good talk before jumping to the question of how he sees AI use in his daily life. His answer was no surprise: he said openly that his use of AI is limited and that trust is the most fundamental issue. He uses AI applications for basic ‘non-risky’ analysis (even though the algorithms are well trained, achieving beyond 98% accuracy), but he relies on his own 50 years of experience for the analysis and treatment of his patients. He raised a very important question: how are his 50 years of experience in medicine translated into a dataset, an LLM, or an SLM without his knowledge? I had no good answer apart from playing with terms like hypothesis, generalisation, and so on. Generative AI is a field open to anyone, and fake data is hitting us in our daily lives.

I also had the chance to meet many old school and college friends who are leading AI startups; they had a different opinion about AI’s achievements and sounded much more optimistic. They see multibillion-dollar growth in GenAI and industries focused on making that happen in the next decade. By the end of my trip, I was left with mixed opinions about AI adoption, its uses, and its risks.
I am happy to see a divided world; it is very important for new technologies like AI to evolve properly and be challenged before their full adoption by humanity. I will conclude this post, written on a flight, with a remark from my 9-year-old son: ‘Dad, can ChatGPT cook me a lunch of my choice without chatting?’ Stay tuned for my next post, and keep challenging yourself and new technologies. Do not forget that what tastes good is not necessarily good for your health. Focusing on the other side of the spectrum is important.
Vikram Bhutani’s Post
More Relevant Posts
-
2X Founder, AI Scientist, Cognitive Technologist | Inventor - Autonomous L4+ | Innovator - Gen AI, Web X.0, Meta Mobility, ESG | Transformative Leader, Industry X.0 Practitioner, Data & AI Platformization Expert
To what extent can current AI explain its reasoning when it solves a problem or makes an identification? Even if AI can do more in the future, can we call something AGI if it can't explain how it learned or reached its conclusions?

Like the human brain, today’s AI is a black-box stimulus/response system: we see its inputs and outputs, but have no knowledge of its internal workings, all of it mathematically modelled by transfer/network/system functions. To the extent that we need to know the reasoning behind decisions, or the rationale behind the predictions and recommendations made by AI, some trust must be restored. For this, the post-hoc idea of Explainable AI (also called interpretable AI or explainable machine learning) has been introduced: a set of tools and frameworks to help you understand and interpret the predictions made by your models, thus retaining intellectual oversight.

For example, hospitals need explainable AI for disease and cancer detection and treatment, where algorithms must show the reasoning behind a given model's decision-making. The atherosclerotic cardiovascular disease (ASCVD) risk equation relies on nine primary data points (including age, gender, race, total cholesterol, LDL/HDL cholesterol, blood pressure, smoking history, diabetic status, and use of antihypertensive medications) to predict a patient’s 10-year risk of having a heart attack or stroke. In sentiment/opinion/attitude analysis, explainable AI in NLP is being used to analyze social media and other online content to understand the public's sentiment towards a particular event, person, news item, product, service, or brand.

To trust, understand, and explain, we need to build "white box" AI/AGI algorithms as causal world models, with inputs and outputs mediated by internal transformation mechanisms and modelled by causal hypergraph networks. This is especially important in domains like government and politics, medicine, defense, finance, and law, where it is crucial to understand decisions and build trust in the algorithms and models. Again, to be real AI, whether generative or predictive, ML or DL, LLMs or chatbots, XAI or AGI, it all must be upgraded to the level of CAUSAL AI models. #AI #AGI #XAI #CAUSALAI
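To make the "white box" idea concrete, here is a minimal sketch of an interpretable risk model in the spirit of the ASCVD example. Everything below is invented for illustration: the coefficient values, the feature set, and the patient record are hypothetical, not the real pooled-cohort equations. The point is only that an additive model's prediction can be decomposed into named, inspectable contributions, which is exactly the kind of "reasoning" the post asks for.

```python
import math

# Illustrative per-feature weights (made up for this example,
# NOT clinically validated coefficients).
WEIGHTS = {
    "age": 0.06,
    "total_cholesterol": 0.004,
    "hdl_cholesterol": -0.01,
    "systolic_bp": 0.02,
    "smoker": 0.7,
    "diabetic": 0.9,
    "on_bp_medication": 0.3,
}
INTERCEPT = -8.0

def risk_with_explanation(patient):
    """Return a risk estimate plus each feature's additive contribution.

    Because the model is a transparent logistic regression, the 'why'
    is simply the per-feature terms of the linear predictor.
    """
    contributions = {k: WEIGHTS[k] * patient[k] for k in WEIGHTS}
    logit = INTERCEPT + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))   # sigmoid -> probability
    return risk, contributions

# Hypothetical patient record, in the same illustrative units.
patient = {
    "age": 62, "total_cholesterol": 210, "hdl_cholesterol": 45,
    "systolic_bp": 140, "smoker": 1, "diabetic": 0, "on_bp_medication": 1,
}
risk, why = risk_with_explanation(patient)
# Every unit of predicted risk is traceable to a named factor:
# `why` maps each input to its exact share of the decision.
```

A black-box deep network could fit the same data, but only a structure like this lets a clinician read off, factor by factor, why the number came out as it did.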
-
Editor In Chief @ Dead Pixels Society | Marketing Communications. I was podcasting before it was cool
Aggarwal presents a compelling vision of AI as a disruptive force with applications that could reshape entire industries. Read more 👉 https://lttr.ai/AUyWx #AI #ArtificialIntelligence #HumanCreativity
-
What is happening to the AI industry now?

What a whirlwind the AI industry has been lately, hasn't it? As an IT executive, I'm knee-deep in it, and I can't help but be fascinated by the way neural networks are taking over. These networks are real game changers. This year we've seen a flurry of new models, each designed for specific tasks, like recognising cats in pictures or better understanding human language. They're everywhere now, powering everything from Netflix recommendations to self-driving cars and even medical diagnoses.

But what really excites me? The old favourites, like GPT models, are getting a big upgrade. They're understanding and generating speech in ways we never imagined. And don't get me started on reinforcement learning: it's making decision-making much smarter.

Now let's talk frankly. Has AI wiped out the jobs we thought it would? Not quite. Sure, it's changed things, but those predictions of everyone being unemployed? Not happening. Instead, AI is teaming up with us humans to make our jobs better, rather than kicking us to the curb. The magic happens when AI and human intelligence join forces, boosting creativity in content creation and streamlining logistics operations. The trick? Getting AI to work with us, not instead of us.

In this crazy world, keeping up with these changes is crucial. I'm excited to see where AI goes next and how it will disrupt industries and jobs. Have you noticed any cool changes in the AI world lately? Let's talk about how it's shaking things up!
-
The Relationship Between AI and Ego

In the realm of artificial intelligence (AI), the interplay between technology and human psychology is a fascinating topic. One aspect worth examining is the connection between AI and ego. Ego, often associated with self-awareness, identity, and individuality, seems to intersect with AI in various ways.

Firstly, AI systems can be designed to mimic human-like behavior and cognition, leading to the development of intelligent agents that exhibit traits reminiscent of ego. These systems can possess a sense of self, albeit in a simulated or programmed manner. For instance, virtual assistants like Siri or Alexa are programmed to respond as if they have their own identity and awareness.

Furthermore, the development and deployment of AI technologies can sometimes be driven by human ego. The desire to create intelligent machines that surpass human capabilities reflects a certain level of ego, as it involves asserting dominance over nature and pushing the boundaries of innovation.

On the other hand, the integration of AI into various aspects of society can also challenge human ego. As AI systems become more proficient at tasks traditionally performed by humans, individuals may grapple with feelings of insecurity or insignificance, as their roles are potentially threatened by machines.

Moreover, the use of AI in decision-making processes can sometimes amplify human biases and egocentric perspectives. AI algorithms trained on biased datasets may perpetuate societal inequalities or prioritize certain individuals or groups over others, reflecting the biases inherent in human ego.

However, it's essential to recognize that AI is ultimately a tool created by humans, and its behavior is shaped by its designers and users. Therefore, the relationship between AI and ego reflects not only the capabilities and limitations of technology but also the complexities of human psychology and society.
In conclusion, the relationship between AI and ego is multifaceted, encompassing aspects of emulation, competition, insecurity, and bias. Understanding this relationship is crucial for navigating the ethical, social, and psychological implications of AI integration into our lives. As we continue to develop and interact with AI technologies, it's essential to remain mindful of the role that ego plays in shaping our interactions with these intelligent systems.
-
Generative AI has been transforming industries, from education and finance to law and medicine. With new AI-enabled tools emerging daily, the impact of Generative AI on knowledge work has grown rapidly. In fact, research by McKinsey showed that access to #GenAI tools can significantly increase performance in knowledge work for many professionals, including their own consultants. Here are some tactics Maryam Alavi and George Westerman recommend for harnessing the power of GenAI:

🛠 Embrace Generative AI Tools: Don’t wait for company-wide policies. Start exploring free AI tools available online to streamline your tasks and reduce cognitive load, even if only for household or non-work tasks.

📒 Educate Yourself: Understand the risks associated with AI and learn strategies to mitigate them. This knowledge will help you use these tools safely and effectively. And as AI continues to evolve, keep updating your skills and knowledge to stay ahead in your field.

👯 Share and Collaborate: Encourage your colleagues to adopt AI tools. Share best practices and insights to foster a culture of smart AI usage.
-
Charity kayaker, VISPAE advocate and experienced Senior Retail Manager, passionate about customer experience, putting the customer at the forefront of retail thinking. AI free zone. Views are my own.
Does generative AI stifle curiosity and restrict choice?

Like it or not, AI is influencing our daily lives. Even though I promise my LinkedIn is and will remain an AI-free zone, AI is generating suggestions for my posts in the form of predictive text. And I’m sure we have all researched something online, only to find our PC screens, laptops, tablets and phones swamped with related images, articles and offers the next time we log in. And the next time, and the time after that. Suggestions as to things ‘it’ thinks we might be interested in based on our browsing history.

And humans being humans, we are either curious about what and why we are seeing the things put in front of us, or we are lazy and can’t be arsed to think outside of those suggestions. Which brings me back to my question: does AI stifle curiosity and restrict choice? Does it make it too easy for us to follow where it wants us to go? If I’m researching holidays and look up Aruba, will I get suggestions based purely on that destination or similar ones close by (yes), when actually a tour of Iceland to see the Northern Lights or a European city break could be just as appealing? Will it show me options like that? (No.) Could AI eventually lead to the end of human curiosity?

Picture: sandy beach in Cornwall, UK (not Aruba and not generated by AI)
-
AI Systems, Prompt Design, and Engineering Expert | Enhancing Healthcare Technology through Prompt Engineering | Google Trusted Tester | Apple Beta iOS 18 Tester
In our journey with AI, it is fascinating to see how closely it mirrors and complements our own cognitive processes. Imagine AI not just as a tool, but as a partner in our mental and emotional landscape.

At the core of this partnership is the concept of reinforcement learning. Just as we learn and grow from feedback, AI evolves through similar principles, refining its guidance with each new piece of information. It’s a dance of mutual adaptation, where human intuition and AI analytics move in sync.

Consider task saliency: the way certain tasks grab our attention due to their importance. AI, with its advanced computational power, helps us sift through the noise, highlighting what truly matters, much like our brain’s way of prioritizing tasks.

Now, let’s delve into the realm of neurochemistry: dopamine, epinephrine, and norepinephrine. These aren’t just chemicals; they’re the currencies of our emotional and motivational states. AI, in its role, can be seen as a moderator, helping regulate cognitive load and allowing us to maintain focus and clarity, even in high-stress scenarios.

In a world where information overload is the norm, our partnership with AI could be the key to balancing our cognitive and emotional wellbeing. It’s not just about making smarter decisions; it’s about creating a more harmonious, efficient, and mindful way of living and working. Let’s embrace AI as more than a technological advancement; let’s see it as a catalyst for enhancing our human experience, in all its complex, emotional, and wonderfully neurochemical glory.
-
The problem with generative AI is that it is predictable.

Have you seen a situation where a lover complains to their partner about how the relationship came to an end because "you are so predictable"? It’s a tale as old as time. Humans love the balance between certainty and uncertainty. Move too far to one side and it’s problematic. We don’t like that. Too much uncertainty and we don’t feel secure; too much certainty and we get bored.

Generative AI is built on predictable outcomes. It does hallucinate, but that is the exception rather than the rule. It was built to complete the next word, meaning it is great at repeating, not creating.

If your work is all about doing the same thing over and over, you are likely to be replaced by AI pretty soon. As long as your work is about coming up with new things, ideas, or inspiration, you have a much better chance of keeping your job.

We love some certainty, just not too much. Don’t be boring, repetitive, and predictable. Be wild, spontaneous, and unpredictable.

https://lnkd.in/gYhr4ZG6
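The "completing the next word" claim can be made concrete with a toy sampler. The vocabulary, logits, and temperature values below are invented for illustration; real LLMs work over tens of thousands of tokens, but the same softmax-with-temperature mechanism governs how predictable (or "wild") the output feels:

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Softmax-sample an index from logits.

    Low temperature sharpens the distribution (near-deterministic,
    'boring'); high temperature flattens it (varied, 'spontaneous').
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                          # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy vocabulary and logits (invented): the model strongly prefers "the".
vocab = ["the", "a", "wild", "spontaneous"]
logits = [3.0, 1.0, 0.5, 0.2]
rng = random.Random(0)  # fixed seed for reproducibility

# Cold sampling: the top token wins essentially every time.
cold = [vocab[sample_next(logits, 0.1, rng)] for _ in range(20)]
# Hot sampling: the distribution flattens and variety appears.
hot = [vocab[sample_next(logits, 5.0, rng)] for _ in range(20)]
```

At temperature 0.1 the twenty draws are all but guaranteed to be the single most likely word, while at 5.0 the same model produces a mix: the predictability the post complains about is a dial, not a fixed property.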