<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://pantelisantonoudiou.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://pantelisantonoudiou.github.io/" rel="alternate" type="text/html" /><updated>2025-07-22T13:39:34+00:00</updated><id>https://pantelisantonoudiou.github.io/feed.xml</id><title type="html">Pantelis Antonoudiou</title><subtitle>Research Scientist, Neuroscience, Data Science, ML.</subtitle><author><name>Pantelis Antonoudiou, PhD</name></author><entry><title type="html">Mechanistic Insights into How Shallow CNNs Classify EEG Seizure Activity</title><link href="https://pantelisantonoudiou.github.io/ongoing%20projects/cnn-MI.md/" rel="alternate" type="text/html" title="Mechanistic Insights into How Shallow CNNs Classify EEG Seizure Activity" /><published>2025-07-08T00:00:00+00:00</published><updated>2025-07-08T00:00:00+00:00</updated><id>https://pantelisantonoudiou.github.io/ongoing%20projects/cnn-MI.md</id><content type="html" xml:base="https://pantelisantonoudiou.github.io/ongoing%20projects/cnn-MI.md/"><![CDATA[<p>Deep learning is increasingly used to support automated event detection, with CNNs achieving strong performance on EEG classification tasks. However, their black-box nature limits their acceptance in neuroscience and clinical settings, where transparency and interpretability remain critical, and manual curation continues to serve as the gold standard.</p>

<p>In this project, we are investigating the internal representations of a shallow CNN trained to detect electrographic seizures. Specifically, we want to understand how the model makes decisions at the single-neuron and ensemble level, going beyond standard input attribution methods.</p>
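<p>For context, "standard input attribution" refers to methods that score each input sample by its influence on the output. A minimal, hedged sketch of one such method (plain gradient saliency) on a hypothetical shallow 1-D CNN — the architecture, sizes, and input length here are illustrative stand-ins, not the actual model:</p>

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a shallow 1-D CNN seizure classifier.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(8, 2),  # logits: [non-seizure, seizure]
)

# One single-channel EEG segment (256 timepoints), tracked for gradients.
x = torch.randn(1, 1, 256, requires_grad=True)

# Gradient of the "seizure" logit w.r.t. the input gives a per-timepoint
# importance score — the kind of attribution this project looks beyond.
model(x)[0, 1].backward()
saliency = x.grad.abs().squeeze()
```

Such maps say *where* in the trace the model looks, but not *how* its internal units encode the decision — hence the focus on neurons and ensembles.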

<p><a href="/assets/images/cnn_interp.png">
  <img src="/assets/images/cnn_interp.png" alt="CNN Interpretability Pipeline" style="max-width: 600px; margin-bottom: 1rem; border-radius: 6px;" />
</a></p>
<p style="font-size: 0.9rem; color: #666;">
  <em>Figure: A) Model architecture, B) Input types, C) Model schematic, D) Separation of activations using PCA (nPCs=100) with UMAP, E) Ensemble identification strategy schematic, F) Silencing of seizure ensembles reduces model prediction to chance only for seizure traces.</em>
</p>
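<p>The activation embedding in panel D can be sketched as follows. This is a minimal example under stated assumptions: <code>acts</code> is a hypothetical matrix of flattened layer activations (rows = EEG traces, columns = units), compressed to 100 PCs with scikit-learn and then embedded in 2-D with umap-learn when it is installed (falling back to the first two PCs otherwise):</p>

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical stand-in for flattened activations from one conv layer.
acts = rng.normal(size=(300, 512))  # 300 traces x 512 unit activations

# Step 1: linear compression to the top 100 principal components (nPCs=100).
acts_pca = PCA(n_components=100, random_state=0).fit_transform(acts)

# Step 2: non-linear 2-D embedding of the PC scores for visualization.
try:
    import umap  # umap-learn, as in panel D
    embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(acts_pca)
except ImportError:
    embedding = acts_pca[:, :2]  # fallback: first two PCs
```

Coloring <code>embedding</code> by seizure vs. non-seizure labels is what reveals the class separation per layer.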

<p><strong>Some early findings include:</strong></p>

<ol>
  <li>Separation of seizure and non-seizure traces emerges at the first convolutional layer.</li>
  <li>Intermediate layers form sparse representations with high redundancy across layers.</li>
  <li>Silencing class-specific ensembles predictably alters model decision outcomes.</li>
</ol>
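<p>The silencing experiment behind the third finding (and panels E–F) can be sketched with a forward hook that zeroes a chosen set of channels at inference time. Everything here is a hypothetical stand-in — the toy architecture, input sizes, and ensemble indices are illustrative, not the actual model or identified units:</p>

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a shallow 1-D CNN seizure classifier.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

# Illustrative channel indices standing in for an identified seizure ensemble.
seizure_ensemble = [2, 5, 11]

def silence(module, inputs, output):
    # Zero the ensemble's activations on the forward pass.
    output[:, seizure_ensemble, :] = 0.0
    return output

x = torch.randn(4, 1, 256)  # batch of 4 single-channel EEG segments
with torch.no_grad():
    logits_intact = model(x)
    handle = model[0].register_forward_hook(silence)
    logits_ablated = model(x)
    handle.remove()
```

Comparing <code>logits_intact</code> and <code>logits_ablated</code> per class is the kind of contrast summarized in panel F, where silencing drops seizure-trace predictions to chance while leaving non-seizure traces largely unaffected.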

<h3 id="coming-soon">Coming soon</h3>
<p>Stay tuned for the arXiv preprint, where we explore human-interpretable features that explain model decisions in terms of inputs, dense neuron clusters, and more.</p>

<hr />]]></content><author><name>Pantelis Antonoudiou, PhD</name></author><category term="Ongoing Projects" /><category term="Deep Learning" /><category term="EEG" /><category term="Mechanistic Interpretability" /><summary type="html"><![CDATA[Investigating how shallow convolutional neural networks internally represent and classify EEG segments in a seizure detection task.]]></summary></entry></feed>