
Compromising Positions - A Technology Podcast

Latest Episode

61 Episodes


    EPISODE 59: Chernobyl 40th Anniversary: Are Nuclear Power Plants Safe from A Cyber Attack?

30.04.2026 | 1 hr 20 min
    In this episode, we commemorate the 40th anniversary of the Chernobyl disaster by asking a chilling modern question: Can a cyber attack cause a nuclear meltdown in 2026? Moving past the Hollywood tropes of ‘exploding reactors,’ we dive into the high-stakes world of OT (Operational Technology) security and critical infrastructure protection. We are joined by Oleg Illiashenko, an expert in nuclear cybersecurity, and Bec McKeown, a specialist in human factors and cognitive readiness, to explore the coordinated digital erosion of safety systems and the psychological ‘misfit’ that occurs when human decision-making collapses under pressure.
    This isn’t a history lesson. It’s a deep dive into supply chain vulnerabilities, IT/OT convergence, and the uncomfortable truth that in a VUCA (Volatile, Uncertain, Complex, Ambiguous) crisis, the first thing to fail isn't the code, it's the human mind's ability to regulate stress.
    Expect a masterclass in resilience engineering, safety-critical design, and why the battle for the future of nuclear safety is actually a battle for trustworthy data.
    In This Episode, We Discuss:
    The Anatomy of a Nuclear Cyber Attack: Why the most credible threat isn't a single hack, but the coordinated degradation of monitoring systems during a plant transient or grid instability.
    From Chernobyl to Fukushima: How organisational silence, governance failures, and ignored ‘weak signals’ remain the primary human-factor risks in modern nuclear facilities.
    The Action Bias Trap: Why the most effective incident response move is often a ‘purposeful pause,’ and how psychological safety allows experts to override failing procedures.
    IT/OT Convergence & Fragility: How digitalisation and AI diagnostics improve safety while simultaneously expanding the attack surface through complex new failure modes.
    Building Cognitive Readiness: Practical strategies for emotional regulation and ‘micro-resets’ to maintain shared alignment and decision quality during a high-consequence cyber event.
    Show Notes
    A Look at the Leadership Management of Chernobyl and Fukushima Nuclear Accidents by Serap Dunman and Müge Ensari Özay
    LinkedIn for Oleg Illiashenko
    LinkedIn for Bec McKeown
    Get in touch with Bec about contributing to Mind Science

    Compromising Positions Presents: Tech Film Noir - The Terminator (1984)

16.04.2026 | 1 hr
    In the premiere episode of Tech Film Noir, hosts Lianne Potter, Jeff Watkins, and Simon Painter travel back to 1984 to dissect James Cameron’s career-defining masterpiece, The Terminator.
*** Regular Compromising Positions Resumes on 30th April! ***
    We’re putting Arnold’s cyborg under the microscope - literally. From the 6502 assembly language hidden in the Terminator’s HUD to the ‘Right to Repair’ scene that would make a modern technician weep, we explore why this low-budget slasher-turned-sci-fi remains the gold standard for AI storytelling. We also tackle the tough questions: Why does time travel require nudity (and will it encourage us to be ‘beach ready’ in the future)? And can we please acknowledge that Kyle Reese saved humanity while wearing deeply questionable, possibly biohazard-level trousers?
    Whether you're here for the technical deep dive into Agentic AI or the high-octane roast of Terminator: Genisys, this episode has enough 80s nostalgia to power a Walkman for a decade.
    Stick around to the end for our completely serious (not serious) food and drink pairings.
    When movies guess the future, we check their work.

    Subscribe here:
    Youtube: https://www.youtube.com/@TechFilmNoir
    Spotify: https://open.spotify.com/show/7tb4fGTPsLO8ZxOeJMZV8U?si=2ddc9cd5153a44bc
    Apple Podcasts: https://podcasts.apple.com/gb/podcast/tech-film-noir-a-technology-and-film-podcast/id1892857131

    EPISODE 58: Self-Driving Cars, Cybersecurity & Trust

26.03.2026 | 50 min
    In this episode, we take a ride into the world of self-driving cars and ask: What happens to trust when your car gets hacked?
Drawing upon a 2025 autonomous car-hacking experiment, we explore how trust is built, broken, and, crucially, whether that trust can be repaired once a system puts you in harm's way.
    This isn’t just about cars. It’s about what happens when we hand over control to a system we don’t fully understand.
    Expect human factors, socio-technical theory, real-world cyber scenarios, and the uncomfortable reality that fixing the system isn’t the same as fixing trust.
    In This Episode, We Discuss:
    The Attack Surface is Trust: Why the real vulnerability in autonomous systems isn’t the code, it’s human belief.
    Hack vs Bug: Why a malicious attack hits differently than a system error (and why that distinction matters).
    Transparency After a Breach: Does telling people the truth about a cyber attack actually rebuild trust or just make them more nervous?
    The Social Truth about Trust: Why you’re not just trusting the car, but the company, the regulators and the entire system behind it.
    LINKS
    The Impact of Cybersecurity Attacks on Human Trust in Autonomous Vehicle Operations by Cherin Lim, David Predez, Linda Ng Boyle and Prashanth Rajivan (2025)
    Foundations for an Empirically Determined Scale of Trust in Automated Systems by Jiun-Yin Jian, Ann Bisantz, Colin Drury, and James Llinas (1998)
    Test your morals with the Moral Machine game.

EPISODE 57: Suspicion by Design: Inside DWP's Universal Credit AI Fraud System

26.02.2026 | 45 min
    What happens when the welfare state designs its technology to side-eye first and ask questions later?
    In this episode of Compromising Positions, we get hands-on with Big Brother Watch’s “Suspicion by Design” report, unpacking how the UK Department for Work and Pensions (DWP) uses algorithmic profiling and AI systems to detect Universal Credit fraud and why defaulting to suspicion is a dangerous position for any government to take.
    This episode is a measured examination of welfare AI, algorithmic decision-making, and what happens to trust, consent, and dignity when systems are built to watch first and explain never.
    Expect socio-technical theory, legal realities, real-world harms, and the kind of uncomfortable questions policymakers really don’t like being asked.
    In This Episode, We Discuss:
    Suspicion Architecture: What happens when suspicion is a design choice.
The Algorithmic Gaze meets Dataveillance: What happens when you can't opt out of AI-led services that are inherently biased against you.
    Why “Security Through Obscurity” Fails: We show why secrecy doesn’t equal safety.
    Fraud Detection that Punishes the Many, not the Few: How to design AI systems that protect public funds without criminalising the people who need it most.
    Show Notes
    Suspicion by Design: What we know about the DWP’s algorithmic black box, and what it tries to hide by Big Brother Watch (2025)
    Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination by David Lyon (Ed) (2003)
Information Technology and Dataveillance by Roger Clarke (1988)

    EPISODE 56: From Dark Triads to Patriotic Hackers: Human Maliciousness in Cybersecurity

29.01.2026 | 45 min
    Is cybersecurity just a technical problem, or a human one?
In this episode, we debut our new format: bridging the gap between deep academic research and boots-on-the-ground security practice. We dive into Zoe M. King et al.'s 2018 paper, "Characterizing and Measuring Maliciousness for Cybersecurity Risk Assessment," to uncover why we need to stop looking at code and start looking at intent.
    From the "Dark Triad" of personality traits to the rise of the "patriotic hacker" in global geopolitics, we peel back the layers of the human onion to understand what actually drives a person to cause harm.
    In This Episode, We Discuss:
    The Maliciousness Assessment Metric (MAM): Why traditional risk assessments fail by ignoring "intent to harm" and how to integrate human factors into your security posture.
    The Four Layers of Maliciousness: A deep dive into the Individual, Micro, Meso, and Macro levels—from personal psychology to national narratives.
    Hacking as Patriotism: How cultural contexts in the US, Russia, and China dictate whether a hacker is seen as a criminal or a hero.
    The "War Games" Effect: How 80s cinema shaped US cybersecurity legislation (CFAA) and continues to influence public perception.
    Insider Threats & Organizational Hygiene: Why disgruntlement is a security vulnerability and how the "Principle of Least Privilege" is your best defense.
    Risk as a Moral Construct: Why the risks your company chooses to mitigate reveal your organisation's true values and concept of justice.
    Show Notes
    Characterizing and Measuring Maliciousness for Cybersecurity Risk Assessment by Zoe M. King et al., featured in the journal Frontiers in Psychology (2018)
    Risk and Blame: Essays in Cultural Theory by Mary Douglas
    Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers by Mary Douglas and Aaron Wildavsky
About Compromising Positions - A Technology Podcast
The award-winning tech podcast that asks: "Are we the ones breaking the world?" Most tech podcasts are an echo chamber for builders. We step outside. We talk to the observers, the social scientists, and the deep thinkers who study the friction we create and the human systems we disrupt. Lianne Potter and Jeff Watkins strip away the industry fluff and pit academic research against the harsh reality of real organisations and real human incentives. We don't just talk about AI, security, and automation; we explore the unintended consequences of our own "elegant" solutions. We're here to look at tech through a different lens and ask the uncomfortable questions that the industry usually avoids. Because if you've built a system that has become everyone else's problem, you have to ask: "Am I the compromising position here?"