About

Learn more about me

Background

I study language because it represents the most sophisticated mapping we have between cognitive states and external information. My goal is to move beyond heuristic-based decoding toward a formal understanding of representational alignment between biological and artificial neural networks.

Right now, I'm an NLP master's student at UC Santa Cruz (Fiat Slug!), where my work centers on how deep learning models learn and represent meaning. In practice, this has meant building span-based systems for semantic role labeling, developing custom pipelines for microelectrode array (MEA) recording systems, and using evaluation as a way to probe what models actually learn about meaning.

Before graduate school, I studied Economic Policy and Language & Mind at New York University (Go Violets!). At NYU, I became fascinated by how language shapes cognition, and by how computational systems might approximate parts of that process. Since then, I've worked on applied machine learning across several sectors, learning how to turn messy real-world data into usable models.

These experiences pulled my interests toward deeper questions about language, thought, and intelligence. Today, my long-term research direction sits at the intersection of NLP, computational neuroscience, and brain–computer interfaces: building real-time interfaces for neural recording systems and exploring how biological signals can be translated into structured, machine-readable representations.

Ultimately, I'm interested in connecting language models with neural signals of thought, and using that connection to better understand both artificial and biological intelligence.

Questions I'm chasing

Language & Learning

I'm drawn to psycholinguistics and computational neuroscience:

  • How are linguistic processes represented in the brain?
  • How does language shape conscious experience?
  • What would it mean for a machine to truly "understand" meaning?

Brain & Signal

I'm building and studying algorithms for neural interfacing:

  • How can we reliably record and stimulate specific neural populations?
  • What are the tradeoffs between invasive and non-invasive recording modalities?
  • How much structure is already present in neural signals before modeling?

Systems & Data

I care about pipelines as much as theory:

  • How do we model cognitive processes with real data?
  • Which architectures best support neural decoding and representation learning?
  • How do we design systems that stay interpretable as they scale?

Current Research Readings

  • Neural Decoding & Brain–Model Alignment
  • Multimodal Architectures & Optimization
  • Datasets, Biology & Theory

Get in touch

I'm always happy to talk about NLP, neurotechnology, and strange questions about language and mind. Feel free to reach out if you're working on something interesting or just want to compare notes.


© 2026 Dom Marhoefer