๐€๐๐๐ซ๐ž๐ฌ๐ฌ๐ข๐ง๐  ๐–๐ž๐š๐ค ๐ƒ๐ž๐œ๐ข๐ฌ๐ข๐จ๐ง ๐๐จ๐ฎ๐ง๐๐š๐ซ๐ข๐ž๐ฌ ๐ข๐ง ๐ˆ๐ฆ๐š๐ ๐ž ๐‚๐ฅ๐š๐ฌ๐ฌ๐ข๐Ÿ๐ข๐œ๐š๐ญ๐ข๐จ๐ง ๐›๐ฒ ๐‹๐ž๐ฏ๐ž๐ซ๐š๐ ๐ข๐ง๐  ๐–๐ž๐› ๐’๐ž๐š๐ซ๐œ๐ก ๐š๐ง๐ ๐†๐ž๐ง๐ž๐ซ๐š๐ญ๐ข๐ฏ๐ž ๐Œ๐จ๐๐ž๐ฅ๐ฌ #IJCAI2023

๐€๐๐๐ซ๐ž๐ฌ๐ฌ๐ข๐ง๐  ๐–๐ž๐š๐ค ๐ƒ๐ž๐œ๐ข๐ฌ๐ข๐จ๐ง ๐๐จ๐ฎ๐ง๐๐š๐ซ๐ข๐ž๐ฌ ๐ข๐ง ๐ˆ๐ฆ๐š๐ ๐ž ๐‚๐ฅ๐š๐ฌ๐ฌ๐ข๐Ÿ๐ข๐œ๐š๐ญ๐ข๐จ๐ง ๐›๐ฒ ๐‹๐ž๐ฏ๐ž๐ซ๐š๐ ๐ข๐ง๐  ๐–๐ž๐› ๐’๐ž๐š๐ซ๐œ๐ก ๐š๐ง๐ ๐†๐ž๐ง๐ž๐ซ๐š๐ญ๐ข๐ฏ๐ž ๐Œ๐จ๐๐ž๐ฅ๐ฌ #IJCAI2023

Preetam Dammu, Yunhe Feng and Chirag Shah 

In an era where machine learning (ML) technologies are becoming more prevalent, the ethical and operational issues surrounding them cannot be ignored. Here’s how we tackled this challenge:

💡 The Problem:
ML models often don’t perform equally well for underrepresented groups, placing vulnerable populations at a disadvantage.

🌍 Our Solution:
We leveraged web search and generative AI to improve robustness and reduce bias in discriminative ML models.

🔍 Methodology:
1. We identified weak decision boundaries for classes representing vulnerable populations (e.g., female doctor of color).
2. We constructed Google search queries and generated text prompts for creating images with DALL-E 2 and Stable Diffusion.
3. We used these new training samples to reduce population bias (a minimal sketch of this pipeline follows below).
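Below is a minimal Python sketch of this pipeline, assuming per-group accuracies from a held-out set and the open-source diffusers library; the function names, threshold, and prompt template are hypothetical, not the paper's implementation:

```python
# Rough sketch of the augmentation loop; names and thresholds are hypothetical.
import os
import torch
from diffusers import StableDiffusionPipeline

def find_weak_groups(group_accuracy, gap=0.15):
    """Step 1: flag (class, group) pairs that trail the best-served group."""
    best = max(group_accuracy.values())
    return [pair for pair, acc in group_accuracy.items() if best - acc > gap]

def generate_samples(weak_groups, n_per_group=50, out_dir="augmented"):
    """Steps 2-3: turn each weak pair into prompts and synthesize new images."""
    os.makedirs(out_dir, exist_ok=True)
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    for label, group in weak_groups:
        prompt = f"a photo of a {group} {label}"  # illustrative prompt template
        for i in range(n_per_group):
            pipe(prompt).images[0].save(f"{out_dir}/{label}_{group}_{i}.png")

# Example: 'doctor' images of women of color underperform, so synthesize more,
# then retrain or fine-tune the classifier on the augmented training set.
weak = find_weak_groups({("doctor", "woman of color"): 0.61,
                         ("doctor", "white man"): 0.93})
generate_samples(weak)
```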

📈 Results:
1. Achieved a significant reduction (77.30%) in the model’s gender accuracy disparity (one way to compute this metric is sketched after this list).
2. Enhanced the classifier’s decision boundary, resulting in fewer weak spots and better class separation.
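For concreteness, here is one way such a disparity reduction could be computed; the accuracy numbers are invented for illustration and are not the paper's measurements:

```python
# Illustrative metric only; the paper's exact disparity definition may differ.
def accuracy_disparity(acc_by_gender):
    """Gap between the best- and worst-served gender groups."""
    return max(acc_by_gender.values()) - min(acc_by_gender.values())

before = accuracy_disparity({"male": 0.92, "female": 0.70})  # 0.22
after = accuracy_disparity({"male": 0.93, "female": 0.88})   # 0.05
print(f"disparity reduced by {(before - after) / before:.2%}")  # ~77%
```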

🌍 Applicability:
Although demonstrated on vulnerable populations, this approach is extendable to a wide range of problems and domains.

https://media.licdn.com/dms/document/media/D561FAQHC2lEiwd7tXg/feedshare-document-pdf-analyzed/0/1692451349751?e=1694044800&v=beta&t=f5YjwyHd_KCSHH8ckfZcOrbyLCIlFG4u-nECZSFTX1o

AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you

Author: Chirag Shah, Professor of Information Science, University of Washington

Big red robot toy teaches his class. Charles Taylor/iStock via Getty Images

The prominent model of information access before search engines became the norm – librarians and subject or search experts providing relevant information – was interactive, personalized, transparent and authoritative. Search engines are the primary way most people access information today, but entering a few keywords and getting a list of results ranked by some unknown function is not ideal.

A new generation of artificial intelligence-based information access systems, which includes Microsoft’s Bing/ChatGPT, Google/Bard and Meta/LLaMA, is upending the traditional search engine mode of search input and output. These systems are able to take full sentences and even paragraphs as input and generate personalized natural language responses.

At first glance, this might seem like the best of both worlds: personable and custom answers combined with the breadth and depth of knowledge on the internet. But as a researcher who studies search and recommendation systems, I believe the picture is mixed at best.

AI systems like ChatGPT and Bard are built on large language models. A language model is a machine-learning technique that uses a large body of available texts, such as Wikipedia and PubMed articles, to learn patterns. In simple terms, these models figure out what word is likely to come next, given a set of words or a phrase. In doing so, they are able to generate sentences, paragraphs and even pages that correspond to a query from a user. On March 14, 2023, OpenAI announced the next generation of the technology, GPT-4, which works with both text and image input, and Microsoft announced that its conversational Bing is based on GPT-4.
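To make the next-word idea concrete, here is a toy sketch using the open-source Hugging Face transformers library with the small GPT-2 model, a stand-in for illustration rather than the model behind ChatGPT or Bard:

```python
# Toy demonstration of next-word prediction; ChatGPT and Bard are far larger
# models, but the underlying principle is the same.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The primary way most people access information today is"
result = generator(prompt, max_new_tokens=12, do_sample=False)
print(result[0]["generated_text"])  # the prompt plus the most likely continuation
```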

Thanks to the training on large bodies of text, fine-tuning and other machine learning-based methods, this type of information retrieval technique works quite effectively. The large language model-based systems generate personalized responses to fulfill information queries. People have found the results so impressive that ChatGPT reached 100 million users in one third of the time it took TikTok to get to that milestone. People have used it to not only find answers but to generate diagnoses, create dieting plans and make investment recommendations.

Read Full Article

On Tech Ethics Podcast – The Impact of ChatGPT on Academic Integrity with Professor Chirag Shah

“Even then, people are still finding things that are biased, and to some people, they are unable to distinguish this information because the source cannot be validated.” – Chirag Shah

This episode discusses the impact of AI on academic integrity, with a focus on ChatGPT.

Our guest is Chirag Shah, Ph.D. Chirag is a Professor of Information and Computer Science at the University of Washington. He is the Founding Director of InfoSeeking Lab and Founding Co-Director of the Center for Responsibility in AI Systems & Experiences (RAISE). He works on intelligent information access systems focusing on fairness and transparency.

Listen on Apple Podcasts

Listen on Spotify

AI-generated information carries risks along with promise

By Professor Chirag Shah, UW Information School. Monday, February 13, 2023

Image generated using DALL-E


Artificial intelligence platforms such as ChatGPT have caught the attention of researchers, students and the public in recent weeks. For this dean’s message, I have invited Chirag Shah, an Information School professor and expert in AI ethics, to share his thoughts on the future of generative AI.

– Anind K. Dey, Information School Dean and Professor

ChatGPT has caused quite an uproar. It’s an exciting AI chat system that leverages huge amounts of text processing to provide short, natural-sounding responses and complete complex tasks. It can write long essays, generate reports, develop insights, create tweets, and provide customized plans for various goals from dieting to retirement planning.

Amid the excitement about what ChatGPT can do, many have quickly started pointing out issues with its usage. Plagiarism and bias are the most immediate concerns, and there are long-term questions about the implications of such technology for educational processes, jobs, and even human knowledge and its dissemination at a global scale.

We have entered a new era in which systems can not only retrieve the information we want, but generate conversations, code, images, music and even simple videos on their own. This is powerful technology that has the potential to change how the world works with information, and as with any revolutionary technology, its benefits are paired with risk and uncertainty.

Traditionally, we have had two types of systems to access information: direct and algorithmically mediated. When we read newspapers, we are accessing information directly. When we use search engines or browse through recommendations on Netflix’s interface, we are accessing algorithmically mediated information. In both categories, the information already existed. But now we are able to access a third type: algorithmically generated information that didn’t previously exist.

There could be great benefits to having AI create information. For example, what if an author working on a children’s book needed an illustration where astronauts are playing basketball with cats in space? Chances are, no system could retrieve it. But if the author makes a query to DALL-E, Imagen, or Stable Diffusion, for example, they will get a pretty good response that is generated rather than retrieved.
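For illustration, such a query might look like the following sketch against OpenAI's image API, assuming the openai Python package (v1+) and an API key; model availability and access details may change:

```python
# Sketch of a generated-rather-than-retrieved image request; assumes the
# `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-2",
    prompt="astronauts playing basketball with cats in space, children's book illustration",
    size="512x512",
    n=1,
)
print(result.data[0].url)  # a freshly generated image, not one retrieved from the web
```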

Generated information can be customized to our given need and context without our having to sift through sources. However, we have little understanding of how and why the information was provided. We can be excited about an all-knowing AI system that is ready to chat with us 24/7, but we should also be wary about being unable to verify what the system tells us. 

What if I asked you which U.S. president’s face is on the $100 bill? If you said “Benjamin Franklin,” then you fell for a trick question. Benjamin Franklin was a lot of things – a Founding Father, scientist, inventor, the first Postmaster General of the United States – but he was never a president. So, you’ve generated an answer that doesn’t or shouldn’t exist. Various pieces of otherwise credible information you know, such as presidents on dollar bills and Benjamin Franklin as a historical figure, gave you a sense of correctness when you were asked a leading question.

Similarly, algorithmically generated information systems combine sources and context to deliver an answer, but that answer isn’t always valid. Researchers are also concerned that these systems invariably can’t or won’t provide transparency about their sources, reveal their processes, or account for biases that have long plagued data and models in AI.

Big tech companies and startups are quickly integrating such technologies, and that raises many pressing questions. Will this be the new generation of information access for all? Will we or should we eliminate many of the cognitive tasks and jobs that humans currently do, given that AI systems could do them? How will this impact education and workforce training for the next generation? Who will oversee the development of these technologies? As researchers, it’s our job to help the public and policymakers understand technology’s implications so that companies are held to a higher standard. We need to help ensure that these technologies benefit everyone and support the values we want to promote as a society.

Oh, and if that Benjamin Franklin question tripped you up, don’t feel bad. ChatGPT gets it wrong too!

Read Article

Congratulations to our lab director Dr. Chirag Shah for this milestone recognition

ACM honors iSchool’s Shah as Distinguished Member

UW Information School Professor Chirag Shah is among 67 global scholars recognized this year as Distinguished Members of the Association for Computing Machinery (ACM) for their outstanding contributions to the computing field. 

The ACM is the world’s largest computing society. It recognizes up to 10 percent of its worldwide membership as distinguished members based on their professional experience, groundbreaking achievements and longstanding participation in computing. The ACM has three tiers of recognition: fellows, distinguished members and senior members. The Distinguished Member Recognition Program, which honors members with at least 15 years of professional experience, recognized Shah for his work at the intersection of information, access and responsible AI. Shah expressed his gratitude and appreciation for the award.

“I’m incredibly grateful for all the support I’ve received from everyone. It’s a very humbling experience,” said Shah.

Shah has contributed a great deal of research on people-centered information access and on counteracting the biases and discrimination present within information systems. One of Shah’s significant contributions to the iSchool has been co-founding Responsibility in AI Systems and Experiences (RAISE).

Read more: https://ischool.uw.edu/news/2022/12/acm-honors-ischools-shah-distinguished-member

Our Lab Director quoted in: Why Metaโ€™s latest large language model survived only three days online

space bears floating in space

“I am both astounded and unsurprised by this new effort,” says Chirag Shah at the University of Washington, who studies search technologies. “When it comes to demoing these things, they look so fantastic, magical, and intelligent. But people still don’t seem to grasp that in principle such things can’t work the way we hype them up to.”

Read Full Article: https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/

No quick fix: How OpenAI’s DALL·E 2 illustrated the challenges of bias in AI

Photo illustration of warped faces on scanned paper.

An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias.

And while OpenAI, the company behind the program, called DALL·E 2, has sought to address the issues, the efforts have also come under scrutiny for what some technologists have claimed is a superficial way to fix systemic underlying problems with AI systems.

“The common thread is that all of these systems are trying to learn from existing data,” Shah said. “They are superficially and, on the surface, fixing the problem without fixing the underlying issue.”

Read More: https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918

How AI Will Change IT Jobs?

AI vs. Humans

“AI is not going to take away our jobs, but it’s going to change the landscape of opportunities.” – Dr. Chirag Shah

Read what our lab director has to say about AI taking over our jobs and whether we should be concerned about technology making our jobs less meaningful and relevant: https://www.informationweek.com/team-building-and-staffing/how-ai-will-change-it-jobs

Predictive analytics in marketing: Achieving success

Marketing and analytics experts said marketers can choose from a number of off-the-shelf predictive analytics tools with machine learning and AI built in. However, Shah explained that the more advanced marketing operations often build their own algorithms and custom tools, seeing it as a way to differentiate their efforts and maximize success for their own organizations. "It almost also becomes a proprietary thing. For many companies, the way they derive their insights is the 'secret sauce,'" he said.

Read More: https://searchbusinessanalytics.techtarget.com/…/Predic..