How to be a good ‘parent’ to AI | TEDxUofW

To Dr. Chirag Shah (Information Matters co-founder), AI is not just another promising technology. As an expert who has worked on AI for years, he sees it almost as a child – one that needs nurturing, guidance, and foundational education.

In his first TEDx talk, delivered at the University of Washington, he drew parallels between parenting and building responsible and accountable AI. He spoke of a future in which we (technologists and society) will need to let go of control over what AI is and does, much as children grow up and become independent decision-makers. What is key and common to both is that “the decisions we make today will impact not just the future of AI, but that of humanity.”

“Common sense is a manifestation of our values, values that we have developed over hundreds of years. These (AI) systems lack these value judgments.” That’s why, he says, it’s important to know “how” the AI reaches its conclusions before it is deployed to carry out tasks or missions.

Listen to his talk to learn why we all need AI education and how important transparency is for building better AI systems, because “AI is our collective child, and it’s growing up.”

Addressing Weak Decision Boundaries in Image Classification by Leveraging Web Search and Generative Models #IJCAI2023

Preetam Dammu, Yunhe Feng and Chirag Shah 

In an era where machine learning (ML) technologies are becoming more prevalent, the ethical and operational issues surrounding them cannot be ignored. Here’s how we tackled this challenge:

💡 The Problem:
ML models often don’t perform equally well for underrepresented groups, placing vulnerable populations at a disadvantage.

🌐 Our Solution:
We leveraged web search and generative AI to improve the robustness of discriminative ML models and reduce their bias.

🔍 Methodology:
1. We identified weak decision boundaries for classes representing vulnerable populations (e.g., female doctor of color).
2. We constructed Google search queries and text prompts for generating images with DALL-E 2 and Stable Diffusion.
3. We used these new training samples to reduce population bias (a rough code sketch follows below).
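
As a rough illustration of this loop (not the authors’ released code), here is a minimal Python sketch using the Hugging Face diffusers library; the class description, prompts, and label are hypothetical placeholders.

```python
# Minimal sketch of the augmentation loop described above.
# Assumes a trained classifier, per-group evaluation data, and the
# Hugging Face `diffusers` library. Prompts and labels are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# 1. A class whose decision boundary was found to be weak
#    (e.g., via low per-group validation accuracy).
weak_class = "female doctor of color"

# 2. Turn the class description into generation prompts
#    (web-search queries would be built in a similar way).
prompts = [
    f"a photo of a {weak_class} in a hospital",
    f"portrait of a {weak_class} wearing a stethoscope",
]

# 3. Generate synthetic training samples with Stable Diffusion.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

new_samples = []
for prompt in prompts:
    image = pipe(prompt).images[0]          # one PIL image per call
    new_samples.append((image, "doctor"))   # class label for retraining

# 4. The (image, label) pairs are added to the training set and the
#    classifier is fine-tuned to strengthen the weak decision boundary.
```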

📈 Results:
1. Achieved a significant reduction (77.30%) in the model’s gender accuracy disparity (see the illustrative computation after this list).
2. Enhanced the classifier’s decision boundary, resulting in fewer weak spots and better class separation.
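
To make the disparity figure concrete, here is a hedged sketch of one common way such a reduction can be computed; the metric definition and the numbers below are illustrative assumptions, not the paper’s exact formulation.

```python
# Illustrative computation of an accuracy-disparity reduction.
# The metric and the numbers are assumptions for demonstration only.
def accuracy_disparity(acc_by_group: dict[str, float]) -> float:
    """Gap between the best- and worst-served groups."""
    return max(acc_by_group.values()) - min(acc_by_group.values())

before = {"male": 0.92, "female": 0.70}   # hypothetical accuracies
after = {"male": 0.91, "female": 0.86}

d0, d1 = accuracy_disparity(before), accuracy_disparity(after)
reduction = (d0 - d1) / d0 * 100
print(f"Disparity reduced by {reduction:.2f}%")  # ~77% with these made-up numbers
```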

🌍 Applicability:
Although demonstrated on vulnerable populations, this approach is extendable to a wide range of problems and domains.

https://media.licdn.com/dms/document/media/D561FAQHC2lEiwd7tXg/feedshare-document-pdf-analyzed/0/1692451349751?e=1694044800&v=beta&t=f5YjwyHd_KCSHH8ckfZcOrbyLCIlFG4u-nECZSFTX1o

AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you

Author: Chirag Shah, Professor of Information Science, University of Washington

A big red robot toy teaches a class. Charles Taylor/iStock via Getty Images

The prominent model of information access before search engines became the norm – librarians and subject or search experts providing relevant information – was interactive, personalized, transparent and authoritative. Search engines are the primary way most people access information today, but entering a few keywords and getting a list of results ranked by some unknown function is not ideal.

A new generation of artificial intelligence-based information access systems, which includes Microsoft’s Bing/ChatGPT, Google/Bard and Meta/LLaMA, is upending the traditional search engine mode of search input and output. These systems are able to take full sentences and even paragraphs as input and generate personalized natural language responses.

At first glance, this might seem like the best of both worlds: personable and customized answers combined with the breadth and depth of knowledge on the internet. But as a researcher who studies search and recommendation systems, I believe the picture is mixed at best.

AI systems like ChatGPT and Bard are built on large language models. A language model is a machine-learning technique that uses a large body of available texts, such as Wikipedia and PubMed articles, to learn patterns. In simple terms, these models figure out what word is likely to come next, given a set of words or a phrase. In doing so, they are able to generate sentences, paragraphs and even pages that correspond to a query from a user. On March 14, 2023, OpenAI announced the next generation of the technology, GPT-4, which works with both text and image input, and Microsoft announced that its conversational Bing is based on GPT-4.
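
To make the “what word comes next” idea concrete, here is a minimal sketch using the openly available GPT-2 model via the Hugging Face transformers library; GPT-2 is an illustrative stand-in, since ChatGPT and Bard use far larger, further-tuned models.

```python
# Minimal next-token demo with GPT-2 (much smaller than ChatGPT/Bard,
# but the underlying principle -- predict the next token -- is the same).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # scores for every vocabulary token
next_token_id = logits[0, -1].argmax().item() # most likely next token
print(tokenizer.decode(next_token_id))        # typically " Paris"
```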

Thanks to training on large bodies of text, fine-tuning and other machine learning-based methods, this type of information retrieval technique works quite effectively. The large language model-based systems generate personalized responses to fulfill information queries. People have found the results so impressive that ChatGPT reached 100 million users in one-third of the time it took TikTok to reach that milestone. People have used it not only to find answers but to generate diagnoses, create dieting plans and make investment recommendations.

Read Full Article

On Tech Ethics Podcast – The Impact of ChatGPT on Academic Integrity with Professor Chirag Shah

“Even then, people are still finding things that are biased, and to some people, they are unable to distinguish this information because the source cannot be validated.” – Chirag Shah

This episode discusses the impact of AI on academic integrity, with a focus on ChatGPT.

Our guest is Chirag Shah, Ph.D. Chirag is a Professor of Information and Computer Science at the University of Washington. He is the Founding Director of the InfoSeeking Lab and Founding Co-Director of the Center for Responsibility in AI Systems & Experiences (RAISE). He works on intelligent information access systems, focusing on fairness and transparency.

Listen on Apple Podcast

Listen on Spotify

AI-generated information carries risks along with promise

By Professor Chirag Shah, UW Information School | Monday, February 13, 2023

Image generated using DALL-E


Artificial intelligence platforms such as ChatGPT have caught the attention of researchers, students and the public in recent weeks. For this dean’s message, I have invited Chirag Shah, an Information School professor and expert in AI ethics, to share his thoughts on the future of generative AI.

— Anind K. Dey, Information School Dean and Professor

ChatGPT has caused quite an uproar. It’s an exciting AI chat system that leverages huge amounts of text processing to provide short, natural-sounding responses as well as to complete complex tasks. It can write long essays, generate reports, develop insights, create tweets, and provide customized plans for various goals, from dieting to retirement planning.

Amid the excitement about what ChatGPT can do, many have quickly started pointing out issues with its usage. Plagiarism and bias are the most immediate concerns, and there are many long-term questions about the implications of such technology for educational processes, jobs, and even human knowledge and its dissemination at a global scale.

We have entered a new era in which systems can not only retrieve the information we want, but generate conversations, code, images, music and even simple videos on their own. This is powerful technology that has the potential to change how the world works with information, and as with any revolutionary technology, its benefits are paired with risk and uncertainty.

Traditionally, we have had two types of systems to access information: direct and algorithmically mediated. When we read newspapers, we are accessing information directly. When we use search engines or browse through recommendations on Netflix’s interface, we are accessing algorithmically mediated information. In both categories, the information already existed. But now we are able to access a third type: algorithmically generated information that didn’t previously exist.

There could be great benefits to having AI create information. For example, what if an author working on a children’s book needed an illustration in which astronauts are playing basketball with cats in space? Chances are, no system could retrieve it. But if the author makes a query to DALL-E, Imagen, or Stable Diffusion, for example, they will get a pretty good response that is generated rather than retrieved.
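
As a minimal sketch of what such a generative query might look like, here is one possible interface using the OpenAI Python client; the model name, size, and API surface are assumptions that vary by client version.

```python
# Hedged sketch: generating (rather than retrieving) an image for the
# children's-book example. Requires the `openai` package (>= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.images.generate(
    model="dall-e-2",
    prompt="astronauts playing basketball with cats in space, "
           "children's book illustration",
    n=1,
    size="512x512",
)
print(response.data[0].url)  # link to the newly generated image
```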

Generated information can be customized to our given need and context without our having to sift through sources. However, we have little understanding of how and why the information was provided. We can be excited about an all-knowing AI system that is ready to chat with us 24/7, but we should also be wary about being unable to verify what the system tells us. 

What if I asked you which U.S. president’s face is on the $100 bill? If you said “Benjamin Franklin,” then you fell for a trick question. Benjamin Franklin was a lot of things — a Founding Father, scientist, inventor, the first Postmaster General of the United States — but he was never a president. So, you’ve generated an answer that doesn’t or shouldn’t exist. Various pieces of otherwise credible information you know, such as presidents on dollar bills and Benjamin Franklin as a historical figure, gave you a sense of correctness when you were asked a leading question. 

Similarly, algorithmically generated information systems combine sources and context to deliver an answer, but that answer isn’t always valid. Researchers are also concerned that these systems invariably can’t or won’t provide transparency about their sources, reveal their processes, or account for biases that have long plagued data and models in AI.

Big tech companies and startups are quickly integrating such technologies, and that raises many pressing questions. Will this be the new generation of information access for all? Will we or should we eliminate many of the cognitive tasks and jobs that humans currently do, given that AI systems could do them? How will this impact education and workforce training for the next generation? Who will oversee the development of these technologies? As researchers, it’s our job to help the public and policymakers understand technology’s implications so that companies are held to a higher standard. We need to help ensure that these technologies benefit everyone and support the values we want to promote as a society.

Oh, and if that Benjamin Franklin question tripped you up, don’t feel bad. ChatGPT gets it wrong too!

Read Article

Congratulations to our lab director Dr. Chirag Shah for this milestone recognition

ACM honors iSchool’s Shah as Distinguished Member

UW Information School Professor Chirag Shah is among 67 global scholars recognized this year as Distinguished Members of the Association for Computing Machinery (ACM) for their outstanding contributions to the computing field. 

The ACM is the world’s largest computing society. It recognizes up to 10 percent of its worldwide membership as distinguished members based on their professional experience, groundbreaking achievements and longstanding participation in computing. The ACM has three tiers of recognition: fellows, distinguished members and senior members. The Distinguished Member Recognition Program, which honors members with at least 15 years of professional experience, recognized Shah for his work at the intersection of information, access and responsible AI. Shah expressed his gratitude and appreciation for the award. 

“I’m incredibly grateful for all the support I’ve received from everyone. It’s a very humbling experience,” said Shah. 

Shah has contributed a great deal of research related to people-centered information access, examining how biases and issues of discrimination present within information systems can be counteracted. One of Shah’s significant contributions to the iSchool has been co-founding the Center for Responsibility in AI Systems & Experiences (RAISE).

Read more: https://ischool.uw.edu/news/2022/12/acm-honors-ischools-shah-distinguished-member?fbclid=IwAR10RqqU2ltZZ4P-OFlZUjlc1i62UYy1TgzlRRwJ5cqNzUtGZTVBP1O_TGY

Our Lab Director quoted in: Why Meta’s latest large language model survived only three days online

“I am both astounded and unsurprised by this new effort,” says Chirag Shah at the University of Washington, who studies search technologies. “When it comes to demoing these things, they look so fantastic, magical, and intelligent. But people still don’t seem to grasp that in principle such things can’t work the way we hype them up to.”

Read Full Article: https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/

No quick fix: How OpenAI’s DALL·E 2 illustrated the challenges of bias in AI

Photo illustration of warped faces on scanned paper.

An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias. 

And while OpenAI, the company behind the program, called DALL·E 2, has sought to address the issues, the efforts have also come under scrutiny for what some technologists have claimed is a superficial way to fix systemic underlying problems with AI systems.

“The common thread is that all of these systems are trying to learn from existing data,” Shah said. “They are superficially and, on the surface, fixing the problem without fixing the underlying issue.”

Read More: https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918?fbclid=IwAR3YkLgofPftAuQ0A-qKpRL1AOgXQkJqBtyMgk3iYNYkWNvMpaEzJ1obW1c

How AI Will Change IT Jobs

“AI is not going to take away our jobs, but it’s going to change the landscape of opportunities,” – Dr. Chirag Shah

Read what our lab director has to say about AI taking over our jobs and whether we should be concerned about technology making our jobs less meaningful and relevant. https://www.informationweek.com/team-building-and-staffing/how-ai-will-change-it-jobs?fbclid=IwAR2x-_zl0HRX_XnryvJUrVXsgR6E6zzDBR-o7Edg53AIVRj_m0NkH5skAeA