Human–artificial intelligence collaboration

Human-AI collaboration is the study of how humans and artificial intelligence (AI) agents work together to accomplish a shared goal.[1] AI systems can aid humans in tasks ranging from decision making to art creation.[2] Examples of collaboration include medical decision-making aids,[3][4] hate speech detection,[5] and music generation.[6] As AI systems become able to tackle more complex tasks, studies are exploring how different models and explanation techniques can improve human-AI collaboration.

Improving collaboration

Explainable AI

When a human uses an AI's output, they often want to understand why the model produced it.[7] While some models, such as decision trees, are inherently explainable, black-box models do not have clear explanations. Various explainable artificial intelligence methods aim to describe model outputs with post-hoc explanations[8] or visualizations,[9] but these methods can provide misleading or false explanations.[10] Studies have also found that explanations may not improve the performance of a human-AI team, but simply increase a human's reliance on the model's output.[11]
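
A minimal sketch of the contrast between inherently explainable and black-box models is shown below: a small decision tree trained with scikit-learn can print its complete decision rules directly, whereas a deep network offers no analogous built-in readout. The dataset and tree depth here are arbitrary choices made only for illustration.

  # Illustrative sketch: a decision tree is inherently explainable
  # because its learned decision rules can be printed directly.
  from sklearn.datasets import load_iris
  from sklearn.tree import DecisionTreeClassifier, export_text

  data = load_iris()
  tree = DecisionTreeClassifier(max_depth=2, random_state=0)
  tree.fit(data.data, data.target)

  # The model's full decision logic, readable as-is. Post-hoc methods
  # such as LIME[8] or Grad-CAM[9] try to approximate this kind of
  # account for black-box models.
  print(export_text(tree, feature_names=list(data.feature_names)))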

Trust in AI

A human's trust in an AI agent is an important factor in human-AI collaboration, dictating whether the human follows or overrides the AI's input.[12] Various factors affect a person's trust in an AI system, including its accuracy[13] and reliability.[14]
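
One way to make the follow-or-override decision concrete is a confidence-gated reliance policy: the human follows the AI's recommendation only when the model's reported confidence clears a threshold, and otherwise decides independently. The sketch below is a hypothetical illustration of such a policy; the function, labels, and threshold are invented for this example and are not taken from the cited studies.

  # Hypothetical sketch of a confidence-gated reliance policy.
  def resolve(ai_label: str, ai_confidence: float,
              human_label: str, threshold: float = 0.9) -> str:
      """Follow the AI only when its reported confidence clears the
      threshold; otherwise defer to the human's own judgment."""
      if ai_confidence >= threshold:
          return ai_label    # trust: follow the AI's recommendation
      return human_label     # override: keep the human's decision

  # Example: a low-confidence AI suggestion is overridden.
  print(resolve("pneumonia", 0.62, "no finding"))  # prints "no finding"

In practice a fixed threshold is a crude proxy: the cited studies suggest that trust develops from observed accuracy and reliability over repeated interactions, not from per-prediction confidence alone.[13][14]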

Adoption of AI

User adoption of AI is crucial for improving human-AI collaboration: adoption is not just a matter of using a new technology, but of transforming how work is done, how decisions are made, and how projects and organizations operate more efficiently. This transformation is essential for realizing the full potential of human-AI collaboration. In the evolving digital landscape, there is increasing pressure to adopt and effectively use AI, which is steadily entering management, work, and organizational ecosystems and enabling digital transformations. Successful adoption of AI is a complex, multifaceted process that requires careful consideration of many factors.[15]

Importance of humanizing AI-generated text

Reasons cited for humanizing AI-generated content include:[16]

  1. Relatability: Human readers seek emotionally resonant content. AI can lack the nuances that make content relatable.
  2. Authenticity: Readers value a genuine human touch behind content, ensuring it doesn't come off as robotic.
  3. Contextual Understanding: AI can misinterpret nuances, requiring human oversight for accuracy.
  4. Ethical Considerations: Humanizing AI content helps identify and rectify biases, ensuring fairness.
  5. Search Engine Performance: AI may not consistently meet search engine guidelines, risking penalties.
  6. Conversion Improvement: Humanized content connects emotionally and crafts tailored calls to action.
  7. Building Trust: Humanized content adds credibility, fostering reader trust.
  8. Cultural Sensitivity: Humanization ensures content is respectful and tailored to diverse audiences.

References

  1. ^ Sturm, Timo; Gerlach, Jin P.; Pumplun, Luisa; Mesbah, Neda; Peters, Felix; Tauchert, Christoph; Nan, Ning; Buxmann, Peter (2021). "Coordinating Human and Machine Learning for Effective Organizational Learning". MIS Quarterly. 45 (3): 1581–1602. doi:10.25300/MISQ/2021/16543. S2CID 238222756.
  2. ^ Mateja, Deborah; Heinzl, Armin (July 2021). "Towards Machine Learning as an Enabler of Computational Creativity". IEEE Transactions on Artificial Intelligence. 2 (6): 460–475. doi:10.1109/TAI.2021.3100456. ISSN 2691-4581. S2CID 238941032.
  3. ^ Yang, Qian; Steinfeld, Aaron; Zimmerman, John (2019-05-02). "Unremarkable AI". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI '19. Glasgow, Scotland, UK: Association for Computing Machinery. pp. 1–11. arXiv:1904.09612. doi:10.1145/3290605.3300468. ISBN 978-1-4503-5970-2. S2CID 127989976.
  4. ^ Patel, Bhavik N.; Rosenberg, Louis; Willcox, Gregg; Baltaxe, David; Lyons, Mimi; Irvin, Jeremy; Rajpurkar, Pranav; Amrhein, Timothy; Gupta, Rajan; Halabi, Safwan; Langlotz, Curtis (2019-11-18). "Human–machine partnership with artificial intelligence for chest radiograph diagnosis". npj Digital Medicine. 2 (1): 111. doi:10.1038/s41746-019-0189-7. ISSN 2398-6352. PMC 6861262. PMID 31754637.
  5. ^ "Facebook's AI for Hate Speech Improves. How Much Is Unclear". Wired. ISSN 1059-1028. Retrieved 2021-02-08.
  6. ^ Roberts, Adam; Engel, Jesse; Mann, Yotam; Gillick, Jon; Kayacik, Claire; Nørly, Signe; Dinculescu, Monica; Radebaugh, Carey; Hawthorne, Curtis; Eck, Douglas (2019). "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live". Proceedings of the International Workshop on Musical Metacreation (MUME).
  7. ^ Samek, Wojciech; Montavon, Grégoire; Vedaldi, Andrea; Hansen, Lars Kai; Müller, Klaus-Robert (2019-09-10). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer Nature. ISBN 978-3-030-28954-6.
  8. ^ Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos (2016-08-13). ""Why Should I Trust You?": Explaining the Predictions of Any Classifier". Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '16. San Francisco, California, USA: Association for Computing Machinery. pp. 1135–1144. doi:10.1145/2939672.2939778. ISBN 978-1-4503-4232-2. S2CID 13029170.
  9. ^ Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. (October 2017). "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". 2017 IEEE International Conference on Computer Vision (ICCV). pp. 618–626. arXiv:1610.02391. doi:10.1109/ICCV.2017.74. ISBN 978-1-5386-1032-9. S2CID 206771654.
  10. ^ Adebayo, Julius; Gilmer, Justin; Muelly, Michael; Goodfellow, Ian; Hardt, Moritz; Kim, Been (2018-12-03). "Sanity checks for saliency maps". Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS'18. Montréal, Canada: Curran Associates Inc.: 9525–9536. arXiv:1810.03292.
  11. ^ Bansal, Gagan; Wu, Tongshuang; Zhou, Joyce; Fok, Raymond; Nushi, Besmira; Kamar, Ece; Ribeiro, Marco Tulio; Weld, Daniel S. (2021-01-12). "Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance". arXiv:2006.14779 [cs.AI].
  12. ^ Glikson, Ella; Woolley, Anita Williams (2020-03-26). "Human Trust in Artificial Intelligence: Review of Empirical Research". Academy of Management Annals. 14 (2): 627–660. doi:10.5465/annals.2018.0057. ISSN 1941-6520. S2CID 216198731.
  13. ^ Yin, Ming; Wortman Vaughan, Jennifer; Wallach, Hanna (2019-05-02). "Understanding the Effect of Accuracy on Trust in Machine Learning Models". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI '19. Glasgow, Scotland, UK: Association for Computing Machinery. pp. 1–12. doi:10.1145/3290605.3300509. ISBN 978-1-4503-5970-2. S2CID 109927933.
  14. ^ Bansal, Gagan; Nushi, Besmira; Kamar, Ece; Lasecki, Walter S.; Weld, Daniel S.; Horvitz, Eric (2019-10-28). "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 7 (1): 2–11. doi:10.1609/hcomp.v7i1.5285. S2CID 201685074.
  15. ^ Tursunbayeva, A.; Chalutz-Ben Gal, H. (2024). "Adoption of artificial intelligence: A TOP framework-based checklist for digital leaders" (PDF). Business Horizons. In press.
  16. ^ "Humanize AI Text". www.humanizeaitext.org. Retrieved 2023-10-19.