DigitalNeuroNews
The DigitalNeuroNews (DNN) is our program newsletter — a curated source of insights, research breakthroughs, career opportunities, and updates bridging neuroscience, AI, and technology.
January 2026 — Designing Systems We Must Live Inside
This edition focuses on a single underlying concern: how to design, understand, and inhabit complex adaptive systems responsibly. As intelligent technologies become more embedded in human environments, the central challenge shifts from building more powerful models to understanding how these systems interact with people, institutions, and contexts over time.
Cybernetic Systems Design in Education
In the Fall semester of 2025, a course in Cybernetic Systems Design was developed and delivered as part of the Digital Neuroscience curriculum. The course addresses a structural challenge in interdisciplinary training: while students acquire advanced methodological skills in neuroscience, AI, and data analysis, they often lack an integrative framework for understanding complex intelligent systems across domains.
Students were introduced to a common conceptual language — sensors, comparators, controllers, actuators, feedback, stability, adaptation, and failure — applicable to biological, artificial, and socio-technical systems. A semester-long design and simulation project required students to conceptualize and implement a cybernetic system capable of sensing its environment, comparing internal states to goals, exerting control, and modifying its behavior over time.
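The sense–compare–control–act vocabulary above can be made concrete with a minimal closed-loop sketch. This is purely illustrative (the class name, noise levels, and adaptation rule are assumptions for this sketch, not course material): a noisy sensor reads the state, a comparator computes the error against a goal, a proportional controller acts on the environment, and a slow adaptation rule tunes the controller gain over time.

```python
import random

class CyberneticLoop:
    """Minimal sense-compare-control-act loop with slow adaptation (toy example)."""

    def __init__(self, goal, gain=0.3):
        self.goal = goal      # reference value the comparator works against
        self.gain = gain      # controller parameter, adapted over time
        self.state = 0.0      # environment variable the loop regulates

    def sense(self):
        # Sensor: a noisy view of the true state.
        return self.state + random.gauss(0, 0.05)

    def step(self):
        reading = self.sense()
        error = self.goal - reading   # comparator: internal state vs. goal
        action = self.gain * error    # controller: proportional response
        self.state += action          # actuator: control exerted on environment
        # Adaptation: gently increase responsiveness, capped for stability.
        self.gain = min(0.9, self.gain * 1.01)
        return error

loop = CyberneticLoop(goal=1.0)
errors = [abs(loop.step()) for _ in range(200)]
# Feedback drives the error toward zero despite sensor noise.
print(f"mean |error| over last 20 steps: {sum(errors[-20:]) / 20:.3f}")
```

Capping the gain illustrates the stability trade-off discussed in the course: a more aggressive controller converges faster but, pushed too far, would overshoot and oscillate.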
An important pedagogical insight: systems thinking is not innate, but can be developed through structured exposure. As the semester progressed, students demonstrated increased ability to reason in terms of feedback loops, delays, trade-offs, and unintended consequences.
Visionary Lectures Series
The Visionary Lectures series was conceived to bridge academia and industry within the Digital Neuroscience program. Fourteen speakers from academia, industry, and entrepreneurial ecosystems shared not only their scientific expertise, but also their trajectories, trade-offs, and lessons learned at the interface of research and application.
Students were exposed to fundamental researchers pushing the boundaries of brain science, industry scientists deploying AI and neurotechnology at scale, clinicians translating discoveries into interventions, and entrepreneurs navigating uncertainty to bring ideas to market.
Building on this momentum, next year the Visionary Lectures will also be opened to students from the EBR Master program, creating a broader interdisciplinary audience.
Field Developments
Neuroscience: The field is shifting from task-specific analyses to scalable, generalizable models. Highlights include an EEG Foundation Model trained on 60,000 hours across 92 datasets, Plexus for neuronal calcium imaging phenotyping, and PyBispectra for electrophysiology analysis.
Artificial Intelligence: Progress is now driven by controllability, multimodality, and workflow integration. Notable releases include Claude for Life Sciences, Qwen3-Omni (unified text/image/audio/video), and Context Engineering guidelines from Anthropic.
Robotics & Embodied Systems: Intelligence emerges through real-time interaction with physical environments. Highlights include NVIDIA's PyCuVSLAM, Meta Aria Gen 2, and VoxeLite Haptics achieving human-resolution touch sensing.
May 2025 — Thinking in Place: How Environments Write the Mind
This issue introduces the new compulsory course, Cybernetic System Design: Bridging Neuroscience, AI, and Society, and explores how environments shape cognition — from the body as the first structured space the mind inhabits, through garments, rooms, corridors, and cities.
The Essay: Environments and Cognition
Cognition does not begin in the brain. It begins in the body. As Merleau-Ponty wrote, "The body is our general medium for having a world." From this embodied foundation, we move outward into increasingly complex infrastructures. Routines emerge from repeated interaction with space. Thought becomes clothed in habit. Habit, in turn, is anchored in place.
Architecture is not a backdrop. It instructs. It scaffolds behavior, biases perception, and stores memory. Urban environments expand this logic — their grids encode collective memory, social expectation, and cognitive tempo. As Calvino wrote, "The city does not tell its past, but contains it like the lines of a hand."
Related research highlights include the International Conference on Embodied Cognitive Science (ECogS 2025), Taniguchi et al.'s Quad-Process Theory of cognition (System 0/1/2/3), and neuroarchitecture studies showing how curved forms and natural vistas measurably reduce stress and increase creativity.
Program Announcements
- UniFr x OkazoLab: New partnership bringing the EventIDE experimental design platform to our research community — sub-millisecond stimulus presentation with EEG, eye tracking, GSR, and VR integration.
- UniFr x VirtuaLeap: Access to EnhanceVR, a virtual reality platform for assessing and training cognitive functions through immersive gamified tasks.
April 2025 — AI at the Crossroads
An intentional breath between rapid pulses of progress. This edition explores the deepening entanglement between mind and machine — from AI's shifting narrative to the rise of neuroadaptive technologies that sense and respond to our thoughts.
AI: Navigating Myth, Reality, and Responsibility
Our collective vision of AI is maturing, shifting from mysticism and fear toward practical considerations of responsibility, ethics, and societal impact. Key perspectives examined:
- Narayanan & Kapoor (AI Snake Oil) challenge the view of AI as inherently transformative, arguing instead that these are statistical systems whose value depends on rigorous validation
- Stanford's 2025 AI Index Report captures explosive advancement but frames progress as inevitable rather than as something actively shaped by governance

- Gry Hasselbalch emphasizes embedding AI within frameworks that prioritize human values — creativity, social resilience, and critical defiance
- Dario Amodei (Machines of Loving Grace) envisions breakthroughs in drug discovery and precision medicine, tempered by interpretability and bias challenges
- Demis Hassabis discusses AI's dual nature — unprecedented problem-solving potential balanced against risks from misuse
Together, these views form a composite image of AI not as a monolith but as a mirror reflecting our institutions, values, fears, and ambitions.
Neuroadaptive Technology Conference (NAT'25)
The NAT'25 conference in Berlin showcased the evolution from simple interfaces toward deeply integrated cognitive partnerships. Neuroadaptive technologies dynamically adapt their behavior by decoding user-specific neurophysiological signals in real time.
Prof. Thorsten Zander presented passive brain-computer interfaces (pBCIs) that allow technology to "listen" to implicit mental states — cognitive load, stress, attention, emotional engagement — without explicit commands. Zander Labs demonstrated the Zypher EEG suite and the NAFAS project, integrating EEG data within vehicle automation systems to modulate automation based on the driver's cognitive state.
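As a toy illustration of the closed-loop idea behind such systems (this is not Zander Labs' actual pipeline; the load index, thresholds, and automation levels are invented for this sketch), automation can be raised when a decoded cognitive-load index is high and handed back when the driver has spare capacity:

```python
def adjust_automation(load_index, level, low=0.3, high=0.7):
    """Adapt vehicle automation to a decoded cognitive-load index (hypothetical).

    load_index: value in [0, 1] decoded from EEG.
    level: current automation level, 0 (manual) .. 3 (full assist).
    """
    if load_index > high and level < 3:
        return level + 1   # driver overloaded: take over more tasks
    if load_index < low and level > 0:
        return level - 1   # driver underloaded: hand control back
    return level           # within band: leave automation unchanged

level = 1
for load in [0.8, 0.9, 0.5, 0.2, 0.1]:  # simulated per-minute load estimates
    level = adjust_automation(load, level)
print(level)  # → 1
```

The dead band between the two thresholds prevents the system from toggling on every noisy estimate, a standard concern in any closed-loop design driven by physiological signals.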
At the University of Fribourg, our research extends these concepts into practical applications focused on optimizing human performance, cognitive resilience, and emotional well-being.
Research Updates
- Adversarial Testing of Consciousness Theories: A landmark collaboration testing IIT vs. GNWT with 256 participants and multimodal neuroimaging — neither theory passed all its tests, setting a new precedent for adversarial cooperation in science
- Skill Retention in Aging: Longitudinal data shows skills improve into the forties (literacy peaks at 46, numeracy at 41), and decline occurs predominantly among individuals with below-average skill use — use it, don't lose it
- New Tools: Graphene biosensors 5x more sensitive than ELISA, fabric-based sensors for hand muscle activity, washable dry EEG electrodes, and RelCon — a motion foundation model trained on 1 billion wearable segments
March 2025 — Multimodality in Neuroscience and Human-Machine Interfaces
This issue dives deep into multimodal neuroscience — what it reveals about perception, prediction, and the constructed self — and surveys the latest in BCIs, adaptive systems, and Swiss AI infrastructure.
The Brain, the Body, and the Fiction of Now
We tend to assume that perception is immediate — that we see the world as it is, in real time. But this is a convenient illusion. What we experience as "the present" is in fact a reconstruction. Our brains receive delayed, incomplete, and noisy sensory data, which they rapidly integrate, interpret, and adjust based on prior knowledge and statistical inference.
With advances in multimodal neuroscience — integrating EEG, eye tracking, heart rate variability, and electrodermal activity — we are beginning to trace how predictive processes unfold across time, brain regions, and physiological systems.
Prediction also plays a central role in emotional experience. Emotions are not simply reactions to external stimuli, but anticipatory states shaped by prior learning and internal context. Multimodal systems allow for finer resolution of these affective processes, opening the door to neurofeedback and affective computing applications for enhancing self-regulation and mental health.
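The predictive account sketched above can be illustrated with a one-dimensional precision-weighted update, in which a prior expectation is combined with a noisy observation in proportion to their reliabilities. This is a textbook toy model of predictive inference, not a claim about neural implementation; all numbers are invented for the example.

```python
def predictive_update(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted fusion of a prediction with noisy sensory evidence."""
    k = prior_var / (prior_var + obs_var)            # gain: how much to trust the data
    post_mean = prior_mean + k * (obs - prior_mean)  # shift belief by the prediction error
    post_var = (1 - k) * prior_var                   # uncertainty shrinks after the update
    return post_mean, post_var

# Strong prior, noisy sensory datum: the percept stays close to the prediction.
mean, var = predictive_update(prior_mean=0.0, prior_var=0.1, obs=1.0, obs_var=0.9)
print(round(mean, 2), round(var, 3))  # → 0.1 0.09
```

The same arithmetic explains why reliable priors dominate ambiguous input: when sensory noise is high relative to prior uncertainty, the gain is small and "the present" we experience leans heavily on what the brain already expected.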
Research & Technology Highlights
Brain-Computer Interfaces:
- Neuralink's human trials: three patients implanted, enabling digital device control through thought
- UCSF researchers enabled a paralyzed individual to control a robotic arm through thought for seven consecutive months
- Synchron and Nvidia's "Chiral" enables paralysis patients to manage digital tasks through neural signals
Innovative Interfaces:
- Meta's "Brain2Qwerty" decodes typed sentences from brain activity with up to 80% character accuracy non-invasively
- Canaery introduced a nose-computer interface to decode odor signals
- Meta's Aria Gen 2 supports advanced machine perception and robotics research
- Google's AI Co-Scientist (Gemini 2.0) generates novel research hypotheses
Swiss AI Infrastructure: Open Brain Institute, Swiss Data Science Center (SDSC), IDSIA, Idiap Research Institute, Swiss National Supercomputing Centre (CSCS) with the new "Alps" supercomputer
February 2025 — Your Master's Thesis and the New Era of Neuroscience
Congratulations on completing another semester. This issue focuses on the Master's thesis as the flagship of your studies, presents a curated portfolio of inspiring thesis projects, and explores the transformative era unfolding in neuroscience.
Master Project Portfolio
Available projects range from early concepts to ongoing studies:
- DysCover: Eye-tracking and tablet-based diagnostic tool for reading difficulties in pre-schoolers
- SHAMS: Smart alarm clock using ultrasound, LIDAR, and ML to assess sleep phases and optimize wake-up timing
- MAP-Drive: Perception-action sequence classification using simulated reality in a multimodal framework
- VirtuaLeap: Impact of VR on behavioral and cognitive training through neurobehavioral evaluation
- Amplify: Digital art tools using neurobehavioral data to modulate sound, light, and matter
- Brain-GPT: AI-driven UI and content modulation integrated with cognitive workload metrics
- Adaptive Living Environment Platform: AI-driven, wearable-based platform for optimizing well-being
- AI-Powered Emotional & Stress Tracking: EDA-enabled phone case for real-time emotional state tracking
- Optimizing ADHD Assessments: Enhanced neuropsychological assessments using multimodal neurobehavioral tracking
- Human or AI?: Neurobehavioral correlates underlying human vs. AI-generated text and image production
The New Era of Neuroscience
We are entering a transformative era in neuroscience — one that deepens our understanding of how brains work, how behaviors emerge, and how this knowledge can address pressing medical and well-being challenges.
Over the past decade, machine learning and generative AI have been increasingly integrated with brain recordings, uncovering complex multidimensional interactions within neural networks. Pioneering companies like Neuralink and Blackrock Neurotech are breaking new ground in brain stimulation and interface technologies. Progress in edge computing enables studying brain function in natural settings using synchronized multi-sensor data streams.
Innovative products include OpenBCI's GALEA (multimodal wearable recording), Emotiv's EEG earphones, and AR smart glasses from Microsoft, Meta, and Magic Leap. Global consortia like the International Brain Laboratory and the ENIGMA Consortium are amplifying data sharing and collaboration.
A crucial question emerges: as enhanced access to personal neural data makes it possible not only to detect underlying biases or disorders but also to influence choices, lifestyle, and behavior — who are the experts we want guiding this innovation?
January 2025 — Inaugural Issue
Welcome to the first edition of DigitalNeuroNews! This newsletter provides updates about the Digital Neuroscience program alongside curated insights, opportunities, and developments bridging neuroscience, AI, and technology.
A Message from the Program Coordinator
Hello, I am Samy Rima, the new program coordinator and study advisor for the Digital Neuroscience Program. This newsletter aims to be your go-to resource as you navigate your academic and professional journey — showcasing groundbreaking research, spotlighting career opportunities, and sharing program reminders.
Research Spotlight
Translating Brain Signals into Speech: Researchers at UC Davis Health developed a brain-computer interface that translates brain signals into speech with 97% accuracy. Aimed at assisting individuals with conditions like ALS, the system allows users to communicate their thoughts within minutes of activation.
Art Meets Neuroscience — Turning Thoughts into Visuals: The IMAGINE project, a collaboration between the Obvious collective and neuroscientists, uses fMRI imaging to capture brain activity while artists envision paintings. Advanced algorithms reconstruct these mental images into visual art, merging neuroscience and AI in a groundbreaking way.
Career Advice: Don't Wait — Take Initiative
Your interdisciplinary background is your superpower. Here's how to start:
- Identify a Real-World Problem that resonates with you — human augmentation, healthcare diagnostics, autonomous vehicles, dynamic architectural design, crowd management
- Assess Your Readiness — map your skills and professional network
- Explore Market Opportunities — research labs, companies, and funding sources
- Create a Concept — develop a project addressing the problem you've identified
- Leverage Funding — explore schemes like Innosuisse
- Be Public — showcase projects through an online portfolio and publications
Whether you lean toward a PhD or industry, position yourself as an expert. Build and showcase a portfolio. Network boldly but remain humble and honest. Be an idealist, but take concrete steps to make your vision a reality.