Story-first, evidence always

BlinkBeam began as a late-night forum for operators who wanted slower threads and faster queries. Today we still behave like a community first: courses fund the moderation bench, not the other way around.

Principles we repeat in onboarding

  • Teach measurement before opinion.
  • Prefer reproducible notebooks over one-off hero saves.
  • Keep vendor-neutral language so teams can compare engines honestly.
  • Protect employer confidentiality—labs stay synthetic unless you opt in.

Team

Hana Sato, Lead SQL Performance Instructor (Execution plans & diagnostics)
Former platform engineer who still ships patches to open-source query engines on weekends.

Min-jun Park, Database Curriculum Designer (Indexing & storage layout)
Builds progressive labs that mirror messy production traces, not toy datasets.

Elena Varga, Enterprise Training Consultant (Team rituals & office hours)
Runs cohort kickoffs and keeps enterprise workshops aligned with internal release cadences.

Theo Okonkwo, Query Optimization Coach (Benchmarking corner)
Specializes in turning “it worked in staging” threads into reproducible benchmarks.

Rina Cho, Customer Success Manager (Welcome desk)
Pairs new members with the right study groups and keeps onboarding friction low.

Marcus Bell, Technical Content Strategist (Deep dives library)
Maintains long-form guides on wait events, latch contention, and plan shape regressions.

Yuki Taneda, Operations Coordinator (Events & calendar)
Schedules live reviews, cohort rooms, and the seasonal performance tuning summit.

Milestones

  1. 2019 — First public plan-reading circle hosted out of Seoul with remote listeners dialing in from bus terminals and airport lounges.
  2. 2022 — Introduced cohort notebooks that mirror anonymized production query shapes instead of toy star-schema demos.
  3. 2025 — Expanded moderator coverage across APAC evenings so live reviews never depend on a single timezone hero.