Machine Learning Street Talk (MLST)


Available Episodes

5 of 119
  • Decoding the Genome: Unraveling the Complexities with AI and Creativity [Prof. Jim Hughes, Oxford]
    Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

    In this eye-opening discussion, Tim Scarfe and Prof. Jim Hughes, a professor of gene regulation at Oxford University, explore the intersection of creativity, genomics, and artificial intelligence. Prof. Hughes brings his expertise in genomics and insights from his interdisciplinary research group, which includes machine learning experts, mathematicians, and molecular biologists.

    The conversation begins with an overview of Prof. Hughes' background and the importance of creativity in scientific research. They delve into the challenges of unlocking the secrets of the human genome and how machine learning, specifically convolutional neural networks, can assist in decoding genome function (a toy sketch of this kind of model appears after the episode list). Discussing validation and interpretability concerns in machine learning, they acknowledge the need for experimental tests and ponder the complex nature of understanding the basic code of life. They also touch on the fascinating world of morphogenesis and emergence, considering potential crossovers into AI and their implications for self-repairing systems in medicine.

    Examining the ethical and regulatory aspects of genomics and AI, the duo explores the implications of having access to someone's genome, the potential to predict traits or diseases, and the role of AI in understanding complex genetic signals. They also consider the challenge of keeping up with the rapidly expanding body of scientific research and the pressures faced by researchers in academia. To wrap up, Tim and Prof. Hughes shed light on the significance of creativity and diversity in scientific research, emphasizing the need for divergent processes and diverse perspectives to foster innovation and avoid consensus-driven convergence.

    Filmed at https://www.creativemachine.io/
    Prof. Jim Hughes: https://www.rdm.ox.ac.uk/people/jim-hughes
    Dr. Tim Scarfe: https://xrai.glass/

    Table of Contents:
    1. [0:00:00] Introduction and Prof. Jim Hughes' background
    2. [0:02:48] Creativity and its role in science
    3. [0:07:13] Challenges in understanding the human genome
    4. [0:13:20] Using convolutional neural networks to decode genome function
    5. [0:15:32] Validation and interpretability concerns in machine learning
    6. [0:17:56] Challenges in understanding the basic code of life
    7. [0:19:36] Morphogenesis, emergence, and potential crossovers into AI
    8. [0:21:38] Ethics and regulation in genomics and AI
    9. [0:23:30] The role of AI in understanding and managing genetic risks
    10. [0:32:37] Creativity and diversity in scientific research
    May 31, 2023
    42:57
  • ROBERT MILES - "There is a good chance this kills everyone"
    Please check out Numerai, our sponsor, @ https://numerai.com/mlst Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.

    Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

    Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussion with entertaining content, captivating millions of viewers from around the world. With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness about the potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience by breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile.

    In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI in shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.

    Robert Miles: @RobertMilesAI https://twitter.com/robertskmiles https://aisafety.info/
    YT version: https://www.youtube.com/watch?v=kMLKbhY0ji0
    Panel: Dr. Tim Scarfe, Dr. Keith Duggar (Joint CTOs - https://xrai.glass/)

    Refs:
    Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer) https://arxiv.org/abs/2304.15004

    TOC:
    Intro [00:00:00]
    Numerai Sponsor Message [00:02:17]
    AI Alignment [00:04:27]
    Limits of AI Capabilities and Physics [00:18:00]
    AI Progress and Timelines [00:23:52]
    AI Arms Race and Innovation [00:31:11]
    Human-Machine Hybrid Intelligence [00:38:30]
    Understanding and Defining Intelligence [00:42:48]
    AI in Conflict and Cooperation with Humans [00:50:13]
    Interpretability and Mind Reading in AI [01:03:46]
    Mechanistic Interpretability and Deconfusion Research [01:05:53]
    Understanding the core concepts of AI [01:07:40]
    Moon landing analogy and AI alignment [01:09:42]
    Cognitive horizon and limits of human intelligence [01:11:42]
    Funding and focus on AI alignment [01:16:18]
    Regulating AI technology and potential risks [01:19:17]
    Aligning AI with human values and its dynamic nature [01:27:04]
    Cooperation and Allyship [01:29:33]
    Orthogonality Thesis and Goal Preservation [01:33:15]
    Anthropomorphic Language and Intelligent Agents [01:35:31]
    Maintaining Variety and Open-ended Existence [01:36:27]
    Emergent Abilities of Large Language Models [01:39:22]
    Convergence vs Emergence [01:44:04]
    Criticism of X-risk and Alignment Communities [01:49:40]
    Fusion of AI communities and addressing biases [01:52:51]
    AI systems integration into society and understanding them [01:53:29]
    Changing opinions on AI topics and learning from past videos [01:54:23]
    Utility functions and von Neumann-Morgenstern theorems [01:54:47]
    AI Safety FAQ project [01:58:06]
    Building a conversation agent using AI safety dataset [02:00:36]
    May 21, 2023
    2:01:54
  • AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)
    Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

    In a historic and candid Senate hearing, OpenAI CEO Sam Altman, Professor Gary Marcus, and IBM's Christina Montgomery discussed the regulatory landscape of AI in the US. The discussion was particularly interesting due to its timing, as it followed the recent release of the EU's proposed AI Act, which could potentially ban American companies like OpenAI and Google from providing API access to generative AI models and impose massive fines for non-compliance. The speakers openly addressed the potential risks of AI technology and emphasized the need for precision regulation. This was a unique approach, as historically US companies have tried their hardest to avoid regulation. The hearing not only showcased the willingness of industry leaders to engage in discussions on regulation but also demonstrated the need for a balanced approach that avoids stifling innovation.

    The EU AI Act, scheduled to come into force in 2026, is still just a proposal, but it has already raised concerns about its impact on the American tech ecosystem and potential conflicts between US and EU laws. With extraterritorial jurisdiction and provisions targeting open-source developers and software distributors like GitHub, the Act could create more problems than it solves by encouraging unsafe AI practices and limiting access to advanced AI technologies. One core issue with the Act is the designation of foundation models in the highest risk category, primarily due to their open-ended nature. A significant risk theme revolves around users creating harmful content and determining who should be held accountable: the users or the platforms. The Senate hearing served as an essential platform to discuss these pressing concerns and work towards a regulatory framework that promotes both safety and innovation in AI.

    TOC:
    00:00 Show
    01:35 Legals
    03:44 Intro
    10:33 Altman intro
    14:16 Christina Montgomery
    18:20 Gary Marcus
    23:15 Jobs
    26:01 Scorecards
    28:08 Harmful content
    29:47 Startups
    31:35 What meets the definition of harmful?
    32:08 Moratorium
    36:11 Social Media
    46:17 Gary's take on BingGPT and pivot into policy
    48:05 Democratisation
    May 16, 2023
    49:43
  • Future of Generative AI [David Foster]
    Generative Deep Learning, 2nd Edition [David Foster] https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/

    Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

    In this conversation, Tim Scarfe and David Foster, the author of 'Generative Deep Learning', dive deep into the world of generative AI, discussing topics ranging from model families and autoregressive models (a toy autoregressive sampling loop appears after the episode list) to the democratization of AI technology and its potential impact on various industries. They explore the connection between language and true intelligence, as well as the limitations of GPT and other large language models. The discussion also covers the importance of task-independent world models, the concept of active inference, and the potential of combining these ideas with transformer and GPT-style models.

    Ethics and regulation in AI development are also discussed, including the need for transparency in the data used to train AI models and the responsibility of developers to ensure their creations are not destructive. The conversation touches on the challenges AI-generated content poses for copyright law and the diminishing role of effort and skill in copyright due to generative models.

    The impact of AI on education and creativity is another key area of discussion, with Tim and David exploring the potential benefits and drawbacks of using AI in the classroom, the need for a balance between traditional learning methods and AI-assisted learning, and the importance of teaching students to use AI tools critically and responsibly. Generative AI in music is also explored, with David and Tim discussing the potential for AI-generated music to change the way we create and consume art, as well as the challenges of training AI models to generate music that captures human emotions and experiences.

    Throughout the conversation, Tim and David touch on the potential risks and consequences of AI becoming too powerful, the importance of maintaining control over the technology, and the possibility of government intervention and regulation. The discussion concludes with a thought experiment about AI predicting human actions and creating transient capabilities that could lead to doom.

    TOC:
    Introducing Generative Deep Learning [00:00:00]
    Model Families in Generative Modeling [00:02:25]
    Autoregressive Models and Recurrence [00:06:26]
    Language and True Intelligence [00:15:07]
    Language, Reality, and World Models [00:19:10]
    AI, Human Experience, and Understanding [00:23:09]
    GPT's Limitations and World Modeling [00:27:52]
    Task-Independent Modeling and Cybernetic Loop [00:33:55]
    Collective Intelligence and Emergence [00:36:01]
    Active Inference vs. Reinforcement Learning [00:38:02]
    Combining Active Inference with Transformers [00:41:55]
    Decentralized AI and Collective Intelligence [00:47:46]
    Regulation and Ethics in AI Development [00:53:59]
    AI-Generated Content and Copyright Laws [00:57:06]
    Effort, Skill, and AI Models in Copyright [00:57:59]
    AI Alignment and Scale of AI Models [00:59:51]
    Democratization of AI: GPT-3 and GPT-4 [01:03:20]
    Context Window Size and Vector Databases [01:10:31]
    Attention Mechanisms and Hierarchies [01:15:04]
    Benefits and Limitations of Language Models [01:16:04]
    AI in Education: Risks and Benefits [01:19:41]
    AI Tools and Critical Thinking in the Classroom [01:29:26]
    Impact of Language Models on Assessment and Creativity [01:35:09]
    Generative AI in Music and Creative Arts [01:47:55]
    Challenges and Opportunities in Generative Music [01:52:11]
    AI-Generated Music and Human Emotions [01:54:31]
    Language Modeling vs. Music Modeling [02:01:58]
    Democratization of AI and Industry Impact [02:07:38]
    Recursive Self-Improving Superintelligence [02:12:48]
    AI Technologies: Positive and Negative Impacts [02:14:44]
    Runaway AGI and Control Over AI [02:20:35]
    AI Dangers, Cybercrime, and Ethics [02:23:42]
    May 11, 2023
    2:31:36
  • PERPLEXITY AI - The future of search.
    https://www.perplexity.ai/ https://www.perplexity.ai/iphone https://www.perplexity.ai/android

    Interview with Aravind Srinivas, CEO and Co-Founder of Perplexity AI: revolutionizing learning with conversational search engines. Dr. Tim Scarfe talks with Dr. Aravind Srinivas about his journey from studying AI and reinforcement learning at UC Berkeley to launching Perplexity, a startup that aims to revolutionize learning through the power of conversational search engines. By combining the strengths of large language models like GPT-* with search engines, Perplexity provides users with direct answers to their questions in a decluttered user interface, making the learning process not only more efficient but also enjoyable (a toy sketch of this retrieve-then-answer pattern appears after the episode list).

    Aravind shares his insights on how advertising can be made more relevant and less intrusive with the help of large language models, emphasizing the importance of transparency in relevance ranking to improve user experience. He also discusses the challenge of balancing the interests of users and advertisers for long-term success.

    The interview delves into the challenges of maintaining truthfulness and balancing opinions and facts in a world where algorithmic truth is difficult to achieve. Aravind believes that opinionated models can be useful as long as they don't spread misinformation and are transparent about being opinions. He also emphasizes the importance of allowing users to correct or update information, making the platform more adaptable and dynamic.

    Lastly, Aravind shares his thoughts on embracing a digital society with large language models, stressing the need for frequent and iterative deployments of these models to reduce fear of AI and misinformation. He envisions a future where using AI tools effectively requires clear thinking and first-principles reasoning, ultimately benefiting society as a whole. Education and transparency are crucial to counter potential misuse of AI for political or malicious purposes.

    YT version: https://youtu.be/_vMOWw3uYvk
    Aravind Srinivas: https://www.linkedin.com/in/aravind-srinivas-16051987/ https://scholar.google.com/citations?user=GhrKC1gAAAAJ&hl=en https://twitter.com/aravsrinivas?lang=en
    Interviewer: Dr. Tim Scarfe (CTO, XRAI Glass)
    Patreon: https://www.patreon.com/mlst
    Discord: https://discord.gg/ESrGqhf5CB

    TOC:
    Introduction and Background of Perplexity AI [00:00:00]
    The Importance of a Decluttered UI and User Experience [00:04:19]
    Advertising in Search Engines and Potential Improvements [00:09:02]
    Challenges and Opportunities in this new Search Modality [00:18:17]
    Benefits of Perplexity and Personalized Learning [00:21:27]
    Objective Truth and Personalized Wikipedia [00:26:34]
    Opinions and Truth in Answer Engines [00:30:53]
    Embracing the Digital Society with Language Models [00:37:30]
    Impact on Jobs and Future of Learning [00:40:13]
    Educating users on when Perplexity works and doesn't work [00:43:13]
    Improving user experience and the possibilities of voice-to-voice interaction [00:45:04]
    The future of language models and autoregressive models [00:49:51]
    Performance of GPT-4 and potential improvements [00:52:31]
    Building the ultimate research and knowledge assistant [00:55:33]
    Revolutionizing note-taking and personal knowledge stores [00:58:16]

    References:
    Evaluating Verifiability in Generative Search Engines (Nelson F. Liu et al., Stanford University) https://arxiv.org/pdf/2304.09848.pdf

    Note: this was a sponsored interview.
    May 8, 2023
    59:48
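
A few of the technical threads above are worth making concrete. The genomics episode discusses using convolutional neural networks to decode genome function. Below is a minimal, illustrative sketch of that kind of model: a 1D CNN in PyTorch that scores a one-hot encoded DNA sequence. The encoding, layer sizes, and scalar "activity" output are assumptions made for this example, not details of Prof. Hughes' actual models.

    # Minimal sketch: a 1D CNN scoring one-hot encoded DNA sequences.
    # Shapes and hyperparameters are illustrative, not from the episode.
    import torch
    import torch.nn as nn

    ALPHABET = "ACGT"

    def one_hot(seq: str) -> torch.Tensor:
        # Returns a (4, len(seq)) tensor; channels are A, C, G, T.
        idx = torch.tensor([ALPHABET.index(base) for base in seq])
        return torch.nn.functional.one_hot(idx, num_classes=4).T.float()

    class SeqCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(4, 32, kernel_size=8, padding=4),  # motif-like filters
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),  # strongest filter match anywhere in the sequence
                nn.Flatten(),
                nn.Linear(32, 1),  # scalar "activity" score (hypothetical target)
            )

        def forward(self, x):
            return self.net(x)

    model = SeqCNN()
    x = one_hot("ACGTACGTGGCCTTAA").unsqueeze(0)  # batch of one sequence
    print(model(x).shape)  # torch.Size([1, 1])

The first convolution plays the role of a learned motif scanner, which is why short 1D filters are a natural fit for regulatory sequence data.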

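The David Foster episode spends time on autoregressive models, the family behind GPT-style generation. The essence is a loop: sample the next token conditioned on what has been generated so far, append it, repeat. Here is a toy version of that loop, using a bigram count table as a stand-in for a real neural network; the corpus and "model" are invented for illustration.

    # Toy autoregressive generation: sample one token at a time, each
    # conditioned on the latest context. The bigram "model" is a
    # stand-in for a real network such as a transformer.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ran"
    tokens = corpus.split()

    # "Train": count which token follows which.
    bigrams = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        bigrams[a][b] += 1

    out = ["the"]
    for _ in range(6):  # generate up to 6 more tokens
        successors = bigrams[out[-1]]
        if not successors:  # dead end: nothing ever followed this token
            break
        words, weights = zip(*successors.items())
        out.append(random.choices(words, weights=weights)[0])
    print(" ".join(out))

A real language model replaces the count table with a network conditioned on the whole prefix rather than just the last token, but the generation loop has the same shape.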

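Finally, the Perplexity interview describes combining search with large language models: retrieve relevant sources first, then answer from them with citations. A minimal sketch of that retrieve-then-answer pattern follows; the documents, bag-of-words scoring, and prompt format are all placeholders invented for the example, not Perplexity's actual stack.

    # Sketch of retrieve-then-answer: rank documents against the query,
    # then build a cited prompt for a language model to answer from.
    import math
    from collections import Counter

    docs = {
        "doc1": "Perplexity AI combines search with large language models.",
        "doc2": "Convolutional networks are widely used in computer vision.",
        "doc3": "Conversational search engines give direct, cited answers.",
    }

    def score(query: str, text: str) -> float:
        # Crude bag-of-words overlap, normalized by document length.
        q, t = Counter(query.lower().split()), Counter(text.lower().split())
        overlap = sum(q[w] * t[w] for w in set(q) & set(t))
        return overlap / math.sqrt(sum(v * v for v in t.values()))

    def build_prompt(query: str, k: int = 2) -> str:
        top = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)[:k]
        context = "\n".join(f"[{d}] {docs[d]}" for d in top)
        return f"Answer using only these sources, citing them:\n{context}\n\nQ: {query}"

    # In a real system this prompt would be sent to an LLM for the final answer.
    print(build_prompt("how do conversational search engines work?"))
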
About Machine Learning Street Talk (MLST)

Welcome! The team at MLST is inspired by academic research and each week we engage in dynamic discussion with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field without succumbing to hype. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/)
Podcast website
