• Human Centered AI: Ep. 009 | What's our AI Iwakura moment?
    Dec 4 2025

    Send us a text

    Human Centered AI: Ep.009 - The Iwakura Principle

    See comments on where to listen 🎧

    In 1871, Japan sent half its government overseas. For two years.

    This isn't ancient history. It's a blueprint for AI transformation.

    The Iwakura Mission included sitting cabinet ministers, future prime ministers, and a six-year-old girl who would later appear on the 5,000-yen note.

    They didn't send junior staff to "figure it out." They sent the decision-makers.

    The result? Mitsubishi. Mitsui. The Tokyo Stock Exchange. Nearly 500 companies founded by one mission member alone.

    Most organizations today respond to AI with a pilot program and a steering committee.

    The Japanese have two words for "leaving something to someone":

    → 任せる (makaseru) - entrusting

    → 放置する (hōchi suru) - abandoning

    One built modern Japan. The other builds slide decks nobody reads.


    Three Critical Insights:

    → Send Decision-Makers, Not Researchers - The people who will implement change need to do the learning. Waiting for a summary is abdication, not delegation.

    → Give It Real Time - The mission lasted nearly two years. Most AI initiatives get a quarter to show ROI. That's not transformation—that's a pilot.

    → Study Systems, Not Just Technology - They visited factories, yes. But also schools, courts, prisons, slums. They learned what NOT to copy as much as what to adopt.


    Four Implementation Principles:

    → Literacy Before Strategy. If leadership hasn't personally used the tools, they're not ready to set direction.

    → Document With Intention. The mission produced a 5-volume, 2,000-page report. Most AI pilots end with a deck nobody reads.

    → Filter Through Context. They studied multiple countries, then built something Japanese. "Best practices" from Silicon Valley won't work in Tokyo—or your organization.

    → Build the Next Generation. The young officials on that mission led Japan for 50 years. Who are your future AI leaders? Not the consultants.

    The Iwakura Mission wasn't a project. It was a commitment.

    150 years later, we're still talking about it because it worked.

    In this episode, Nathan Paterson and Brittany Arthur explore what this 150-year-old voyage teaches us about taking AI transformation seriously and why most organizations are confusing delegation with abandonment.


    How you commit matters more than how much you invest.

    42 Min.
  • Human-Centered AI: Ep. 008 | Microsoft Partnership, Academy Updates, and Spatial Intelligence
    Nov 25 2025

    Microsoft Partnership, Academy Updates, and Spatial Intelligence | Human-Centered AI Ep. 008

    Three signals from the AI frontier this week. Each one reshapes how we think about AI readiness.

    SIGNAL 1: Microsoft Partnership
    DTJ is now Microsoft's official training partner for AI education in Japan—working with government officials and policymakers. When governments invest in AI literacy (not just tools), it confirms: this skill is baseline now.

    SIGNAL 2: Academy Confidence
    Our graduates walk into interviews ready for questions like "How do you think about AI trade-offs?" They've built conviction, not memorized answers. December and January cohorts are now open.

    SIGNAL 3: Spatial Intelligence Is Live
    Dr. Fei-Fei Li's work on AI that understands 3D space just dropped. Take one photo, and AI generates a navigable 3D environment. For manufacturing, logistics, robotics—the next wave isn't coming. It's here.

    THREE CRITICAL INSIGHTS:

    1. AI Literacy Moved From Vertical to Horizontal - This isn't specialized anymore. It's baseline. Every role. Every level.

    2. Confidence Is the Competitive Advantage - Technical knowledge is optional. Strategic conviction about AI is not.

    3. The Frontier Keeps Revealing Itself - While most organizations are figuring out ChatGPT, AI just moved from digital to physical.

    We watch the frontier so you don't get blindsided. 37 minutes that translate what's coming into what it means for your work.

    LINKS:
    🔗 Human-Centered AI Leadership Academy: https://www.designthinkingjapan.com/ai-leadership
    🔗 DTJ Website: https://www.designthinkingjapan.com/
    🔗 Microsoft Elevate Japan: https://www.microsoft.com/en-us/elevate
    🔗 World Labs (Spatial Intelligence): https://www.worldlabs.ai/

    ABOUT HUMAN-CENTERED AI PODCAST:
    Weekly insights on AI, leadership, and what's actually happening on the frontier. Hosted by Design Thinking Japan (DTJ), a human-centered AI company in Tokyo.

    We help leaders navigate AI strategy with clarity and confidence—not hype.

    36 Min.
  • S3E9: The Third Way - The Intrapreneur's Path with Junichi Yamashita
    Nov 10 2025

    The Third Way - The Intrapreneur's Path from Inside with Junichi Yamashita

    When people talk about innovation, they present two choices: leave and start fresh, or stay and accept the status quo. But there's a third way—proven in Japanese organizations by someone who's walked this path multiple times.

    Junichi Yamashita built digital products used by millions—Coke ON (65M downloads), multiple Rakuten ventures—all from inside established companies. This isn't theory. It's how innovation actually happens in Japanese organizations, told by someone who's done it repeatedly.

    ✅ What You'll Learn

    How to create momentum when starting with nothing
    The two types of "no" in Japanese business—and why it matters
    Getting beyond inspiration: the actual logistics of innovation
    Creating scenarios that win stakeholder support
    Being different as an advantage in your organization
    DX lessons that matter for AI transformation

    🎯 This Is For You If...

    You have ideas but unclear how to move them forward
    You're told "we need innovation" with no clear path
    You're wondering if you need to leave to create something new
    You're leading transformation but struggling with stakeholder alignment
    You're preparing for AI and want to learn from digital transformation success

    💡 Key Insights

    "Start the ball rolling—once it starts, it doesn't stop easily"
    The hardest part isn't execution—it's creating the first moment of momentum. When asked if something is possible: "I think so... shall I take a look?" This creates the next conversation.

    "You don't have to do something differently—share what people aren't aware of"
    Being different becomes valuable when you find the intersection between what's missing in your environment and what only you can offer.

    "Japanese people are great at doing things right—few can show what the right things to do are"
    Excellence in execution exists. What's missing is identifying the path forward through uncertainty. Once clear, collective power becomes extraordinary.

    "Talk about the value, not the technology"
    Don't explain what something is. Explain what it creates: "Coffee purchases go from 5 to 20 per month." People need to understand why it matters.

    👤 Junichi Yamashita
    Senior Director, Coca-Cola Japan | Coke ON Business Leader
    Led: Coke ON (65M users, 500K+ vending machines), world's first vending machine subscription
    Previously: Rakuten (Ecosystem Strategy, CEO direct report), Korean startup (Country Manager from zero), McKinsey & IBM
    LinkedIn: /junichiyamashita

    🎙️ Host: Brittany Arthur
    Co-founder, Design Thinking Japan
    Helping Japanese organizations innovate since 2012
    designthinkingjapan.com

    🎧 Business Karaoke Podcast
    Authentic conversations with leaders navigating innovation in Japan. Real experience, not just theory.
    Subscribe for more conversations
    Connect: Hello@DesignThinkingJapan.com


    The future isn't built by choosing between leaving or staying. It's built by finding the third way forward.

    35 Min.
  • How to Launch a New Business Inside Your Company with Junichi Yamashita
    Nov 8 2025
    "We need innovation," you're told. But how do you actually move it forward? Many people share this struggle. In this episode we welcome Junichi Yamashita, who grew Coke ON (65M downloads) at Coca-Cola and launched multiple new ventures at Rakuten, to talk about realistic ways to start something new inside a company. Not theory, but concrete wisdom you can use on the ground. A candid conversation covering successes and failures alike.

    ✅ What You'll Take Away
    In this 47-minute conversation, we share practical insights such as:
    • The first concrete steps when launching a new business
    • How to find allies (sponsors) inside the company and bring them on board
    • Using the "30% rule" to win executive understanding
    • Why you can lead DX without being a digital expert
    • Making new challenges sustainable while keeping up your day job
    • Collaborating effectively with external partners and consultants
    All of it grounded in Yamashita-san's own experience.

    🎯 This Is For You If...
    • You want to start something new in your organization but aren't sure how to proceed
    • You've been handed an innovation project that isn't moving forward
    • You're driving DX but struggling to win understanding and cooperation around you
    • You're weighing whether to start your own company or keep challenging yourself where you are
    • You have ideas and plans but face hurdles in approvals and budgeting
    • You're a leader who wants to drive change within a traditional corporate culture
    And even beyond these, anyone interested in innovation inside organizations should find something to take home.

    📌 Topics Covered
    • Intrapreneur vs. entrepreneur: the merits of each
    • Starting from zero: why begin with 2-3 people
    • The power of the 30% prototype and how to use it
    • How to build a steering committee
    • Balancing speed vs. marathon thinking
    • Communication that crosses the digital-literacy gap
    • DX success patterns and failure patterns
    • The importance of passion and personal interest
    • Working effectively with external consultants
    • Behind the scenes of Coke ON's development

    💡 Ideas That Stood Out
    A few approaches from Yamashita-san's own practice:
    "Start with 2-3 people"
    Gather a big group from the start and reconciling opinions eats your time. Begin with 2-3 people who truly believe, then expand gradually; it ends up faster.
    "Show it at 30% completion"
    Rather than chasing perfection, show something concrete even at roughly 30% done. Talk alone is forgotten after the meeting, but give people a "thing" and they respond.
    "Internal stakeholders are your first customers"
    Before the end user, internal stakeholders must understand the value. Without their cooperation, the project won't move forward.
    "Start from Why, not How"
    DX tends to fail when it starts from the technology. Think from purpose: why is this needed, and what experience do you want to create?
    "Could you explain it to your mother?"
    Can you explain it without digital jargon, in words anyone understands? That's the key to bringing diverse stakeholders along.
    These are lessons Yamashita-san absorbed through real experience.

    👤 Guest: Junichi Yamashita
    Senior Director, Coca-Cola (Japan) Co., Ltd.
    Currently: Business lead for Coke ON (a loyalty program for vending machines); 65M downloads, linked to 500K+ vending machines; launched Coke ON Pass, Coca-Cola's world-first vending machine subscription, in 2021
    Previously: Rakuten (ecosystem and membership strategy, reporting directly to the CEO); Rakuten Ichiba (head of the smartphone business and ROOM, a shopping SNS); Korean startup (Japan Country Manager, built from n=1); McKinsey and IBM (strategy and IT consulting)
    LinkedIn: https://www.linkedin.com/in/junichiyamashita

    💬 Please share your thoughts...
    49 Min.
  • S3E7: From Tech to Trust with Daryl Osuch
    Oct 22 2025

    The Legal Bridge: Technology to Trust | S3E7


    Guest: Daryl Osuch - Unit Manager, Legal Operations at JERA Co., Inc. | Host of The Legal Ops Podcast

    Episode Length: 51 minutes

    Episode Description

    "I feel like I'm fighting an education battle."

    Daryl Osuch identifies what many organizations are missing about AI adoption. Not a technology battle. Not a process battle. An education battle.

    In this conversation, Daryl shares what he's learning at the intersection of legal operations, AI implementation, and organizational trust. His perspective—both mechanic and driver of AI systems—reveals why the gap between capability and comprehension might be the real bottleneck.

    Microsoft's research shows 70% of AI transformation involves people, 20% workflows, and only 10% algorithms. Yet many organizations find their resource allocation tells a different story. Daryl brings rare expertise: implementing generative AI at JERA while building frameworks that help people actually trust and adopt it.


    Key Themes

    The Translation Gap
    Legal teams are discovering they're not gatekeepers—they're translators between technical capability and human comprehension. When technical concepts get explained but not understood, that's where adoption stalls.

    Trust as Architecture
    Trust operates in layers: data, algorithm, company. When one layer doesn't hold, the entire stack can struggle—regardless of technical capability.

    The Education Battle
    The real challenge isn't teaching people to use AI tools. It's making complexity accessible without losing truth. Translation capability is becoming strategic, not supplementary.

    Democratization with Guardrails
    "Vibe coding" enables people who've never coded to build solutions. The question becomes: How do you create frameworks that enable exploration while maintaining standards?

    The Soft Skills Advantage
    When everyone has access to similar AI tools, what creates distinction? Humanity, authenticity, judgment, empathy, wisdom—the entirely human elements.


    Key Insights from Daryl

    💭 "I feel like I'm fighting an education battle. That's literacy. It's not technical, it's not procedural."

    💭 "If a company does it right and allows democratization with simple guardrails, users have more autonomy, feel more in control, and stay connected to the process."

    💭 "I think one of the necessary roles and powerful functions of a lawyer is to be some kind of translator."

    💭 "The technology almost always outpaces regulation."

    💭 "People will start actively putting humanity and authenticity first when they are looking for something."


    The Reframing Question

    "What is the impact of the work you're doing right now, and how can you improve or magnify that impact?"

    Not "should we use AI?" but "what am I trying to accomplish, and could AI help me accomplish it better?"

    Purpose first. Tool second.


    About Daryl Osuch

    Daryl Osuch solves problems most organizations don't see yet. As Unit Manager of Legal Operations at JERA Co., Inc. in Tokyo, he automates workflows, implements generative AI, and helps legal teams understand what their technology actually does.

    Host of The Legal Ops Podcast and fluent in both law and code, Daryl's philosophy is simple: be both mechanic and driver. Know how it works, not just that it works.

    Connect with Daryl:

    • The Legal Ops Podcast on Spotify
    • LinkedIn: linkedin.com/in/daryl-osuch


    51 Min.
  • Human Centered AI: Ep.007 - Validation Architecture, Not Validation Effort
    Oct 14 2025

    Validation Architecture, Not Validation Effort | Human Centered AI Ep 007

    Deloitte Australia delivered a $440,000 AI-assisted report. The client discovered fake citations, non-existent authors, and books that were never written.

    This isn't about criticizing Deloitte - they're tackling what we're all facing. How do you validate AI output without destroying the speed advantage?

    The Speed Paradox
    AI generates a 100-page report in 3 hours. Human validation takes 2 weeks.
    You can't slow back down to human speed (that defeats the purpose). You can't trust blindly (Deloitte proved that costs $440,000).

    So what's the answer?

    In This Episode:
    → What actually broke at Deloitte (and why it's a process problem, not a technology problem)
    → Why LLMs are eloquence engines, not truth engines
    → The validation architecture we use for AI-assisted reports
    → How to build checkpoints that preserve speed advantage
    → Why transparency about AI use becomes competitive advantage
    → Managing AI agents vs. managing humans (completely different principles)
    → Four implementation guidelines you can use immediately

    Key Insights:
    The validation bottleneck is real. If you're reading every word, you're back to human speed with added risk.
    Transparency must come first. The AI conversation happens before the project, not after someone finds hallucinations.
    Speed without checkpoints is just risk. Build validation milestones throughout creation, not just at the end.

    Our Approach:
    - Declare sources first (set boundaries or you'll get books that don't exist)
    - Cross-validate patterns, not sentences
    - Build checkpoints throughout (like data packets - check key milestones, not every byte)
    - Human expertise where it matters (evaluate output quality, not proofread words)
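    As a rough illustration of the "declare sources first" and checkpoint ideas, here is a minimal sketch; the source names, sampling rate, and function names are invented for illustration, not DTJ's actual tooling:

```python
# Minimal sketch (illustrative names only): flag citations that were
# never declared up front, and pick a reproducible sample of sections
# for human review instead of proofreading every word.
import random

DECLARED_SOURCES = {"OECD 2023 report", "Ministry white paper"}

def undeclared_citations(citations):
    """Return citations that fall outside the declared source list."""
    return [c for c in citations if c not in DECLARED_SOURCES]

def sample_for_review(sections, rate=0.2, seed=42):
    """Pick a reproducible ~20% sample of sections for human review."""
    rng = random.Random(seed)
    k = max(1, int(len(sections) * rate))
    return rng.sample(sections, k)

draft = ["OECD 2023 report", "Imaginary Author 2024"]
print(undeclared_citations(draft))  # → ['Imaginary Author 2024']
```

    The point of the sketch: undeclared citations get caught mechanically at each checkpoint, and human expertise is spent on a reproducible sample of milestones rather than on reading every word.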

    Three Questions for Your Practice:
    - What's your validation framework that doesn't require reading every word?
    - Have you told clients HOW you use AI before they discover it themselves?
    - Are you validating during creation or only after?

    How you validate matters more than how much you validate.

    Deloitte paid $440,000 for this lesson publicly. Learn it here for free.

    RESOURCES:
    📊 AI Future Signals 2025 Report (with full methodology): https://www.designthinkingjapan.com/#futuresignals

    31 Min.
  • Human Centered AI: We tested the "AI Conbini" (Real x Tech Lawson) at Takanawa Gateway
    Sep 25 2025

    "This is the next generation AI-powered convenience store that will become the standard."

    When Lawson and KDDI made this bold claim about their Real Tech store in Tokyo, we had to see it ourselves. As AI implementation practitioners, we learn as much from ambitious attempts as we do from polished successes.

    The press releases promised 14 AI cameras for personalized recommendations, intelligent avatars, robot food prep, and adaptive shopping experiences.

    What we found instead: no camera disclosure, an "AI avatar" that was a human on a video call, a branded Roomba, staff manually counting inventory beside computer vision equipment, non-interactive screens, and zero personalization.

    We spoke English to the "smart avatar." It replied, "A little." That's when we realized we were talking to a person, not AI.

    This isn't about criticizing Lawson or KDDI; they tackled an impossible challenge. Japanese convenience stores are already efficiency masterpieces.

    The promise gap between AI marketing and AI reality is widening across industries.

    Three Critical Insights

    1. Marketing promises can sabotage good work. Customers felt misled by "AI-powered" experiences that were actually human-powered—even though human solutions might be better.

    2. Integration trumps innovation. 14 cameras don't automatically create personalized experiences. The hard work is connecting cameras to inventory systems, recommendation engines, and displays in ways that actually help customers.

    3. Expectation management matters. When you promise "the future," customers expect something genuinely different from everywhere else.

    The technology exists: facial recognition for greetings, real-time inventory tracking, gaze detection, automated checkout. The challenge isn't capability; it's system integration and user experience design.

    Four Implementation Guidelines

    1. Start with specific friction. Not "AI-powered store," but "What customer problems can technology solve? Long lines? Product discovery? Language barriers?"

    2. Test quietly, announce loudly. Build it, validate it works, then tell people. Order matters.

    3. Be honest about automation. Customers can handle knowing that humans help remotely. They can't handle feeling deceived.

    4. Under-promise, over-deliver. Surprise beats disappointment every time.

    Lawson and KDDI deserve credit for pushing boundaries publicly. Most companies play it safe.

    But their experience reminds us that customer trust comes from honest, valuable experiences and not impressive press releases.

    The future of retail will involve AI. But it'll be shaped by companies solving real customer problems, not showcasing impressive technology.

    40 Min.
  • S3E6: Every Business Question is Now a Security Question with Jonathan Baier
    Aug 21 2025

    Security expert Jonathan Baier joins Brittany Arthur to explore how business leaders can approach AI security strategically. Learn practical frameworks for implementing AI while protecting what matters most to your organization.


    What You'll Learn

    Reframe Security as Strategy

    • How security teams can accelerate AI initiatives rather than slow them down
    • The three types of AI security every executive must understand: security of AI, security from AI, and security with AI
    • Moving from "we need AI" to identifying specific value-creating opportunities

    Bridge the Gap Between Business and Security

    • Practical frameworks for non-technical leaders to have meaningful security conversations
    • Why 70% of AI success depends on people, not algorithms
    • How to ask the right questions when you don't have deep technical knowledge

    Master the Innovation-Protection Balance

    • When to take calculated risks vs. when to proceed more cautiously
    • Real examples of companies navigating AI security decisions
    • Starting with business problems rather than AI solutions


    Key Takeaways

    Security becomes a competitive advantage when technology access is democratized

    Your differentiators are data, process, and people—not the technology itself

    Every business question becomes a security question at scale

    Curiosity and mindful experimentation beat both paralysis and reckless confidence

    Small companies need shared security understanding, not dedicated security officers


    3 Power Quotes for Sound Bites

    1. On AI's Limitations

    "AI gives us what we ask for and not necessarily what we need."


    2. On Taking the Right Approach

    "You actually shouldn't start with AI. You should start with what is your problem that you're trying to solve?"


    3. On Security Team Partnerships

    "I think many security teams want to be helpful, but they've sort of gotten stereotyped as they're going to get in the way. And so they're not used to someone coming and saying, hey, let's work together and figure out how to do this. But I think they're very much excited to do that."


    Connect with Jon: https://www.linkedin.com/in/jonathanbaier/

    54 Min.