PROJECT 1
Advancing Bias-Free Sentiment Analysis: Scaling the SProp GNN to SOTA-Level Performance
AUTHOR: Hubert Plisiecki
AFFILIATION: Stowarzyszenie na rzecz Otwartej Nauki
Modern transformer-based architectures have demonstrated remarkable performance in sentiment analysis but often learn and propagate social biases from their training data. To address this, I previously introduced the Semantic Propagation Graph Neural Network (SProp GNN), a bias-robust alternative that relies exclusively on syntactic structures and word-level emotional cues. While the SProp GNN effectively mitigates biases—such as political or gender bias—and provides greater interpretability, its performance currently falls slightly short of state-of-the-art (SOTA) transformer models. This project aims to advance the SProp GNN by testing and implementing architectural and methodological improvements to elevate its performance to transformer levels and beyond. Proposed enhancements include: (1) developing alternative sentence parsing models and graph setups to better align with the propagation of emotional information through syntactic structures, (2) experimenting with various taxonomies for parts of speech and dependency types, and (3) exploring alternative SProp architectures and conducting extensive hyperparameter optimization. Additional ideas for improvement are also welcome. Achieving SOTA performance while maintaining the model’s ethical and transparent design could establish the SProp GNN as a valuable alternative for sentiment analysis across diverse applications. Results from this hackathon will be shared on the original project’s GitHub repository, with proper attribution for contributions.
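As a rough illustration of the core idea, the snippet below propagates word-level valence scores along the edges of a syntactic dependency graph. The hardcoded parse, the tiny valence lexicon, and the propagation rule are all hypothetical stand-ins for this sketch; the real pipeline would use a parser such as spaCy and emotion norms data, and the SProp GNN learns its propagation rather than applying a fixed one.

```python
# Minimal sketch: propagate word-level valence scores along dependency edges.
# The parse, lexicon, and propagation rule are illustrative stand-ins only.

# Dependency edges (head -> dependent) for "the movie was not great"
edges = [("was", "movie"), ("movie", "the"), ("was", "great"), ("great", "not")]

valence = {"the": 0.0, "movie": 0.1, "was": 0.0, "great": 0.8, "not": 0.0}
negators = {"not"}

def build_adjacency(edges):
    """Undirected adjacency list over the dependency tree."""
    adj = {}
    for head, dep in edges:
        adj.setdefault(head, []).append(dep)
        adj.setdefault(dep, []).append(head)
    return adj

def propagate(adj, valence, steps=2, damping=0.5):
    """Naive propagation: each step, a word's score moves toward the mean
    score of its neighbours; a negator neighbour flips the sign."""
    scores = dict(valence)
    for _ in range(steps):
        new = {}
        for word, nbrs in adj.items():
            neigh_mean = sum(scores[n] for n in nbrs) / len(nbrs)
            s = (1 - damping) * scores[word] + damping * neigh_mean
            if any(n in negators for n in nbrs):
                s = -s  # negation reverses the propagated valence
            new[word] = s
        scores = new
    return scores

adj = build_adjacency(edges)
scores = propagate(adj, valence)
sentence_score = sum(scores.values()) / len(scores)
print(round(sentence_score, 3))
```

Because the model sees only the syntax and per-word emotion scores, demographic associations absorbed by contextual embeddings have no channel through which to enter the prediction, which is the bias-robustness argument in miniature.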
MATERIALS:
https://arxiv.org/abs/2411.12493
https://arxiv.org/abs/2407.13891
https://link.springer.com/article/10.3758/s13428-023-02212-3
PROGRAMMING LANGUAGE: Python
WHAT YOU CAN GAIN:
1. Learn Graph Neural Network techniques like syntactic graph creation, custom layers, and attention pooling.
2. Gain experience in ethical AI by reducing biases in sentiment analysis models.
3. Work with NLP pipelines using datasets (e.g., GoEmotions) and tools like spaCy for emotion prediction.
4. Improve skills in model optimization through architecture tuning and hyperparameter exploration.
5. Collaborate on an open-source project and contribute to reproducible, interdisciplinary AI research.
REQUIREMENTS:
At least one of:
1. Experience with Graph Neural Networks
2. Background in Linguistics
3. Experience with Natural Language Processing
4. Another specific skill or perspective that would make you a valuable part of the team
We also welcome non-programming linguists.
NUMBER OF PARTICIPANTS: 2 – 8
PROJECT 2
AI Chart Surgeon: Improving Visualizations, One Graph at a Time
AUTHORS: Piotr Migdał, Katarzyna Kańska
AFFILIATION: independent AI consultant at p.migdal.pl
Good charts present data in a way that is easy to understand and interpret.
We will use modern AI tools to improve existing charts, following the best practices of data visualization.
We will construct a tool that is able to:
– extract data from an existing chart
– suggest appropriate chart types for the data
– create code for a new chart
– generate the new, improved chart
Most scientists are not data visualization experts, so we will create a tool that helps them create better charts. It will provide concrete feedback on their choices, not only to get results but also to teach good practices.
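As a sketch of the "suggest appropriate chart types" step above, a minimal rule-based fallback might look like this. The rules and thresholds are illustrative assumptions for the sketch, not the planned tool's logic (which would lean on LLMs):

```python
# Illustrative heuristic for suggesting a chart type from extracted data.
# The rules and thresholds below are assumptions, not the tool's actual logic.

def suggest_chart(columns):
    """columns: dict mapping column name -> list of already-extracted values."""
    kinds = {
        name: "numeric" if all(isinstance(v, (int, float)) for v in vals) else "categorical"
        for name, vals in columns.items()
    }
    numeric = [n for n, k in kinds.items() if k == "numeric"]
    categorical = [n for n, k in kinds.items() if k == "categorical"]

    if len(numeric) >= 2 and not categorical:
        return "scatter plot"
    if len(categorical) == 1 and len(numeric) == 1:
        n_cats = len(set(columns[categorical[0]]))
        # Few categories: a bar chart; many categories: a dot plot reads better.
        return "bar chart" if n_cats <= 10 else "dot plot"
    if len(numeric) == 1 and not categorical:
        return "histogram"
    return "table"  # fall back when no simple mapping applies

print(suggest_chart({"country": ["PL", "DE", "FR"], "gdp": [1.0, 4.2, 3.1]}))  # → bar chart
```

In the actual tool, a vision-capable model would handle the data-extraction step and a heuristic or LLM layer like this one would justify its suggestion to the user, which is where the teaching value lies.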
Additionally, many charts are not suitable for republishing – both in terms of their visual quality and copyright restrictions.
We plan to test:
– which types of charts work well for data extraction
– which types of data are good candidates for automatic chart generation
– which AI models work best for each task
We plan to use modern Large Language Models (LLMs) and vision models, such as GPT-4o, Gemini, and Claude.
Any partial solution (e.g., only data extraction or only chart generation) would be a valuable achievement on its own.
MATERIALS:
– [Data Looks Better Naked] https://www.darkhorseanalytics.com/blog/data-looks-better-naked
– [Meaning + Beauty in Data Vis and Data Art] https://lisacharlottemuth.com/2015/12/19/Meaning-and-Beauty-in-Data-Vis/
– Resources from [D3.js workshop at ICM for KFnrD] https://p.migdal.pl/blog/2016/02/d3js-icm-kfnrd
– [An Introduction to ggplot2] https://uc-r.github.io/ggplot_intro
– [ggplot2: Elegant Graphics for Data Analysis] https://ggplot2-book.org/getting-started
Additional inspirations:
– [The Fallen of World War II] http://www.fallen.io/ww2/
– [Graphic Presentation of Data] https://archive.org/details/graphicpresentat00brinrich/page/18/mode/thumb?view=theater
– [Analiza danych maturalnych z lat 2010-2014] https://github.com/stared/delab-matury
PROGRAMMING LANGUAGES: Python (Pandas, Seaborn, Matplotlib), R (ggplot2), visual interfaces (in which case no programming skills are needed)
WHAT YOU CAN GAIN:
– Understanding of workflows with AI, including processing images, extracting data in structured formats, and creating code.
– Understanding principles of data visualization and learning good practices for creating charts – both for research and presentations.
REQUIREMENTS:
Interest in creating better charts and using AI tools.
Important: There is a qualification task (the email address to which you should send your solution will be provided after you complete the registration form).
Find one published chart (from a research paper, presentation, or mainstream media) that you think could be significantly improved.
1. Provide the source of the chart (e.g. link)
2. Explain its shortcomings
3. Suggest a better way to visualize the data – this can be subtle (e.g. different color palette) or more substantial (e.g. different chart type, showing only key data)
Two optional but highly recommended steps:
4. Extract the data (manually, semi-automatically, or using AI tools) and document your process
5. Create an improved chart (preferably using ggplot2, Pandas, or Seaborn, but any tool is acceptable) – include both code and result
We also welcome non-programmers involved in data analysis who create or use charts, as well as UI/UX designers and artists.
NUMBER OF PARTICIPANTS: 4 – 16
PROJECT 3
Automated Motion Tracking for Early Neurological Assessment in Infants Based on the Hammersmith Neonatal and Infant Neurological Examination (HINE)
AUTHOR: Paulina Domek
AFFILIATION: SWPS University
This preliminary research project aims to explore the potential of automated motion tracking systems in supporting the Hammersmith Neonatal and Infant Neurological Examination (HINE). We will attempt to develop a computer vision-based approach to analyze video recordings of infant assessments, with the goal of extracting quantifiable movement features that could assist in clinical evaluation. The proposed system will combine motion tracking technology with machine learning algorithms to potentially provide objective measurements of infant motor performance.
Our methodology will involve collecting video recordings of HINE assessments and applying pose estimation algorithms to track infant movements. We plan to use frameworks such as OpenPose or DeepLabCut for movement analysis, focusing on key features such as spontaneous movement patterns, posture, and reflex responses. The project will explore the feasibility of machine learning models in distinguishing between different movement patterns, while acknowledging the complexity and variability inherent in infant motor development.
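For illustration, the snippet below computes a few simple movement features from a single keypoint trajectory of the kind OpenPose or DeepLabCut returns per frame. The coordinates, frame rate, and choice of features are hypothetical stand-ins for this sketch:

```python
# Sketch: extract simple movement features from a pose-estimation output.
# The (x, y) trajectory is a hypothetical stand-in for one tracked keypoint
# (e.g. a wrist) as OpenPose or DeepLabCut would report it per frame.
import math

trajectory = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5), (2.5, 1.0)]  # per-frame (x, y)
fps = 30.0

def displacements(points):
    """Euclidean distance moved between consecutive frames."""
    return [math.dist(a, b) for a, b in zip(points, points[1:])]

def movement_features(points, fps):
    d = displacements(points)
    return {
        "path_length": sum(d),                 # total distance travelled
        "mean_speed": sum(d) / len(d) * fps,   # assumed units: pixels/second
        "peak_speed": max(d) * fps,
    }

feats = movement_features(trajectory, fps)
print({k: round(v, 2) for k, v in feats.items()})
```

Features of this kind, computed over many keypoints and assessment segments, are the sort of quantifiable inputs a downstream classifier of movement patterns would consume.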
If successful, this tool could complement traditional HINE assessments by offering additional data points for clinicians to consider when screening for early signs of neurological disorders. The potential benefits include enhanced objectivity in assessment, improved documentation of infant movement patterns, and the possibility of identifying subtle motor abnormalities that might warrant further clinical investigation.
We recognize the challenges in developing such a system, including the need for extensive validation, the complexity of infant movement patterns, and the importance of maintaining the central role of clinical expertise in assessment. This exploratory project represents an initial step toward combining modern computer vision techniques with established clinical practices in infant neurological assessment, potentially contributing to the broader field of technology-assisted pediatric healthcare.
MATERIALS:
https://www.youtube.com/watch?v=FnvbsqjYKBc
Romeo, D. M., Cowan, F., Haataja, L., Ricci, D., Pede, E., Gallini, F., … & Romeo, M. (2020). Hammersmith infant neurological examination for infants born preterm: predicting outcomes other than cerebral palsy. Developmental Medicine & Child Neurology, 63(8), 939-946. https://doi.org/10.1111/dmcn.14768
Souza, T. G. d., Bagne, E., Mizani, R. M., Rotob, A. A., Gazeta, R. E., Zara, A. L. d. S. A., … & Passos, S. D. (2022). Accuracy of the Hammersmith Infant Neurological Examination for the early detection of neurological changes in infants exposed to Zika virus. Medicine, 101(25), e29488. https://doi.org/10.1097/md.0000000000029488
Maćkowska, K., Raźniewska, M., Siwiec, G., Skrzypczak, J., Ostrzyżek-Przeździecka, K., & Gąsior, J. S. (2021). Wykorzystanie skali Hammersmith Infant Neurological Examination u niemowląt w celu przewidywania lub potwierdzenia wystąpienia mózgowego porażenia dziecięcego – systematyczny przegląd piśmiennictwa [Use of the Hammersmith Infant Neurological Examination scale in infants to predict or confirm cerebral palsy: a systematic literature review]. Child Neurology, 29(59), 57-65. https://doi.org/10.20966/chn.2020.59.470
Apaydın, U., Erol, E., Yıldız, A., Yıldız, R., Acar, Ş. S., Gücüyener, K., … & Elbasan, B. (2021). The use of neuroimaging, Prechtl's General Movement Assessment and the Hammersmith Infant Neurological Examination in determining the prognosis in 2-year-old infants with hypoxic ischemic encephalopathy who were treated with hypothermia. Early Human Development, 163, 105487. https://doi.org/10.1016/j.earlhumdev.2021.105487
Harpster, K., Merhar, S. L., Illapani, V. S. P., Peyton, C., Kline‐Fath, B. M., & Parikh, N. A. (2021). Associations between early structural magnetic resonance imaging, Hammersmith Infant Neurological Examination, and General Movements Assessment in infants born very preterm. The Journal of Pediatrics, 232, 80-86.e2. https://doi.org/10.1016/j.jpeds.2020.12.056
Pietruszewski, L., Moore‐Clingenpeel, M., Moellering, G. C., Lewandowski, D. J., Batterson, N., & Maitre, N. L. (2022). Predictive value of the Test of Infant Motor Performance and the Hammersmith Infant Neurological Examination for cerebral palsy in infants. Early Human Development, 174, 105665. https://doi.org/10.1016/j.earlhumdev.2022.105665
Luke, C., Mick-Ramsamy, L., Bos, A. F., Benfer, K. A., Bosanquet, M., Gordon, A., … & Boyd, R. N. (2024). Relationship between early infant motor repertoire and neurodevelopment on the Hammersmith Infant Neurological Examination in a developmentally vulnerable First Nations cohort. Early Human Development, 192, 106004. https://doi.org/10.1016/j.earlhumdev.2024.106004
PROGRAMMING LANGUAGES: Motion Tracking Software: OpenPose, DeepLabCut, or similar pose estimation frameworks. Data Processing Tools: Python, OpenCV, NumPy for video preprocessing and feature extraction.
WHAT YOU CAN GAIN:
– Hands-on Experience – Work with motion tracking, AI, and video analysis in a real-world neurobiology application.
– Interdisciplinary Collaboration – Engage with experts from neuroscience, AI, psychology, and design.
– Technical Skills – Learn about pose estimation, data processing, and potential AI applications in healthcare.
– Problem-Solving & Research – Explore the challenges of automating medical assessments.
– Impactful Work – Contribute to a project that could improve early diagnosis of neuromotor disorders.
REQUIREMENTS:
Education & Background:
– Students/researchers in neuroscience, AI, computer science, or biomedical engineering (preferred).
– Open to anyone with a strong interest in neurotechnology.
Technical Skills:
– Python + Visual Studio (required) for data processing & motion tracking.
– Experience with OpenCV, OpenPose, DeepLabCut (a plus).
– (Optional) Familiarity with machine learning (TensorFlow/PyTorch).
Language & Communication:
– English (B2+) – ability to read research, collaborate, and document findings.
– Strong teamwork & problem-solving mindset.
Commitment:
– Availability for active participation.
– Interest in applying AI to neuroscience.
Our project welcomes non-programmers from diverse backgrounds, for example: biologists, medical professionals, psychologists, neuroscientists, artists, designers, ethics communicators, and science communicators.
Motion tracking for neurological assessment is an interdisciplinary challenge, and we need expertise beyond coding.
NUMBER OF PARTICIPANTS: 4 – 10
PROJECT 4
Eye orbit segmentation and eye movement detection via fMRI
AUTHORS: Cemal Koba, Jan Argasinski
AFFILIATION: Sano Center for Computational Medicine
In our previous research, we demonstrated that the mean fMRI time series from eye regions correlate with spontaneous brain activity in visual and somatomotor regions. We now aim to refine our analyses by characterizing eye movements more precisely rather than using the mean signal from the whole eye region. To achieve this, we plan to create an algorithm that processes 4D data (3D spatial data + time) from eye regions. More specifically, we want to automate the following steps:
– Locating and isolating eye orbits in a given 4D fMRI image
– Identifying the initial position of the eye
– Reporting the movement parameters (such as translation and rotation) over time
– Reporting summary statistics such as relative and absolute motion, coherence between both eyes, and displacement
– Optional: Find the neural correlates of each summary statistic
Although similar algorithms already exist, they are often deep-learning-based and trained on specific populations. Our goal is to develop an algorithm that operates solely on the subject’s available data, without requiring a pre-trained model, and is adaptable to specific clinical populations.
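As a sketch of the summary-statistics step listed above, the snippet below computes absolute and relative motion and a simple between-eye coherence from per-frame translation vectors. The example values, units, and the choice of Pearson correlation as the coherence measure are illustrative assumptions; in the real pipeline the translations would come from registering the isolated orbits across volumes:

```python
# Sketch: summary statistics from per-frame eye translations (assumed mm).
# Values and the correlation-based coherence measure are illustrative only.
import math

left  = [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (0.5, 0.1, 0.1), (0.4, 0.0, 0.1)]
right = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (0.6, 0.2, 0.1), (0.5, 0.1, 0.1)]

def magnitudes(translations):
    return [math.sqrt(x*x + y*y + z*z) for x, y, z in translations]

def absolute_motion(translations):
    """Mean displacement from the initial position."""
    return sum(magnitudes(translations)) / len(translations)

def relative_motion(translations):
    """Mean frame-to-frame displacement."""
    steps = [math.dist(a, b) for a, b in zip(translations, translations[1:])]
    return sum(steps) / len(steps)

def coherence(a, b):
    """Pearson correlation between the two eyes' displacement magnitudes."""
    ma, mb = magnitudes(a), magnitudes(b)
    mean_a, mean_b = sum(ma) / len(ma), sum(mb) / len(mb)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(ma, mb))
    var_a = sum((x - mean_a) ** 2 for x in ma)
    var_b = sum((y - mean_b) ** 2 for y in mb)
    return cov / math.sqrt(var_a * var_b)

print(round(coherence(left, right), 3))
```

Statistics of this form, computed per subject without any pre-trained model, are what would then be fed into the optional seed-based connectivity analysis.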
MATERIALS:
https://direct.mit.edu/netn/article/5/2/451/97539/Spontaneous-eye-movements-during-eyes-open-rest
https://pubmed.ncbi.nlm.nih.gov/27411785/
https://github.com/DeepMReye/DeepMReye
PROGRAMMING LANGUAGES: Python
WHAT YOU CAN GAIN:
Experience in handling 4D datasets, image processing algorithms for isolating a region of interest from the rest of an image, and brain function mapping using seed-based connectivity.
REQUIREMENTS:
– Familiarity with coding in Python
– Familiarity with image processing algorithms is appreciated, but not a must
– Interest in processing fMRI data
We also welcome non-programming people with knowledge of ocular systems, as this will make it easier to define the eye movements.
NUMBER OF PARTICIPANTS: 3 – 8
PROJECT 5
Can mental health be quantified? – preliminary project of a mobile app for patients receiving psychiatric care
AUTHOR: Sylwia Adamus
AFFILIATION: University of Warsaw, Faculty of Physics; Medical University of Warsaw, Faculty of Medicine
The popularity of healthcare mobile apps has been rising steadily, with a variety of them available for each medical specialty. Those dedicated to patients receiving psychiatric care, however, often lack necessary functionality and focus on tracking one’s symptoms through descriptive analysis.
This project was conceptualized during the 12th edition of Bravecamp, qualifying for its finale. We will brainstorm an app that would allow for simultaneous tracking of both medications and mental health symptoms, focusing on a quantitative approach inspired by scales used in psychiatry and psychology. The project will include generating test data, programming basic functionalities, and visualizing the output that a potential app user would receive.
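As a sketch of the "generating test data" step, one might simulate daily self-report entries like this. The item names and the 0-3 response scale are illustrative assumptions loosely modeled on common psychiatric questionnaires, not the app's actual design:

```python
# Sketch: simulate daily self-report data on a 0-3 scale and summarise a
# weekly trend. Items and scale are illustrative assumptions only.
import random

random.seed(42)  # reproducible test data
ITEMS = ["mood", "sleep", "anxiety", "energy"]

def simulate_day():
    """One day's questionnaire responses, each item scored 0 (none) to 3 (severe)."""
    return {item: random.randint(0, 3) for item in ITEMS}

def daily_total(entry):
    """Higher total = more severe symptoms (range 0-12 with four items)."""
    return sum(entry.values())

days = [simulate_day() for _ in range(14)]
week1 = sum(daily_total(d) for d in days[:7]) / 7
week2 = sum(daily_total(d) for d in days[7:]) / 7
print(f"week 1 mean: {week1:.2f}, week 2 mean: {week2:.2f}")
```

A quantitative summary like the weekly means above is exactly the kind of output a patient or clinician could track alongside medication changes.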
Beginners in programming are welcome; what matters most is your creativity!
MATERIALS:
https://doi.org/10.2196/mental.4984
PROGRAMMING LANGUAGES: Python
WHAT YOU CAN GAIN:
The participants can gain insight into outpatient psychiatric care in Poland, how patients are evaluated during medical consultations, and what the process of designing a mobile app looks like.
REQUIREMENTS:
– basic Python programming skills
– basic statistical analysis skills
– basic data visualisation skills
– communicative English
We also welcome a non-programming graphic designer.
NUMBER OF PARTICIPANTS: 4 – 6
PROJECT 6
Second Generation Diffusion Models of Brain Dynamics via Flow Matching
AUTHOR: Adam Sobieszek
AFFILIATION: University of Warsaw
This project focuses on new methods in modelling EEG brain dynamics by combining flow matching, a recent generalization of diffusion models, with transformer-based representations of neural signals. While traditional diffusion models have shown promise in generative tasks, flow matching offers a more flexible framework that can be formulated as neural differential equations, enabling a wider range of applications beyond simple generation. We will leverage this flexibility to tackle multiple challenges in neural signal processing and brain-computer interface (BCI) applications. We will work on models that can perform tasks such as source separation in the signal domain, as well as flow matching operating in the representation space of transformer models trained on neural data. Flow matching in representation space could aid controlled signal generation by incorporating additional information about represented brain processes. We will focus on data from language-related BCI tasks, where we will work on detecting language-related brain activity. Our project will explore how flow matching can be used to model the complex trajectories of brain states during cognitive tasks, while maintaining the interpretability advantages of transformer-based representations. If successful, this approach could be developed further after the event into a promising framework for modeling and manipulating neural signals in research settings.
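For orientation, the toy sketch below sets up the conditional flow matching regression with linear interpolation paths: sample noise x0 and data x1, form x_t = (1 - t) * x0 + t * x1, and regress a velocity model v(x_t, t) onto the target velocity x1 - x0. The 1D distributions and the constant "models" are illustrative stand-ins; the project would train a neural network on EEG representations instead:

```python
# Toy conditional flow matching with linear (optimal-transport) paths in 1D.
# Distributions and the constant stand-in "models" are illustrative only.
import random

random.seed(0)

def sample_pair():
    x0 = random.gauss(0.0, 1.0)   # noise sample
    x1 = random.gauss(3.0, 0.5)   # "data" sample from a shifted distribution
    return x0, x1

def fm_loss(model, n=1000):
    """Monte Carlo estimate of the flow matching regression loss."""
    total = 0.0
    for _ in range(n):
        x0, x1 = sample_pair()
        t = random.random()
        xt = (1 - t) * x0 + t * x1
        target_velocity = x1 - x0           # d/dt of the linear path
        total += (model(xt, t) - target_velocity) ** 2
    return total / n

# For these distributions the best constant velocity is E[x1 - x0] = 3.0.
print(round(fm_loss(lambda x, t: 3.0), 2))
print(round(fm_loss(lambda x, t: 0.0), 2))  # a worse model has higher loss
```

Integrating the learned velocity field from t = 0 to 1 then transports noise to data, which is what makes the same machinery usable for generation, source separation, and trajectory modelling.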
MATERIALS:
1. Flow matching used successfully in audio signal generation: https://arxiv.org/abs/2406.18009
2. A good explanation of flow matching by Yannic Kilcher: https://www.youtube.com/watch?v=7NNxK3CqaDk&ab_channel=YannicKilcher
3. A deep dive into the mathematics of Flow Matching: https://arxiv.org/abs/2302.00482
4. Transformer-based EEG representations: https://arxiv.org/abs/2405.18765
PROGRAMMING LANGUAGES: Python
WHAT YOU CAN GAIN:
Deeper understanding of new deep learning methods, experience applying them to brain signals
REQUIREMENTS:
If coding: knowledge of Python and PyTorch, or EEG processing experience
If not coding: understanding of brain rhythms and interest in language processing, or a good understanding of the mathematics used in flow matching
NUMBER OF PARTICIPANTS: 4 – 13