SFI Visual Intelligence
SFI Visual Intelligence is a Norwegian Center for Research-based Innovation that aims to unlock the potential of visual intelligence across our main innovation areas (medicine and health, marine science, the energy sector, and earth observation) by enabling the next generation of deep learning methodology for extracting knowledge from complex image data.
SFI Visual Intelligence conducts fundamental research in deep learning to produce new solutions, innovations, and reliable technologies within these innovation areas.
As we publish new research, host events, and present our work, we will post video highlights on this channel. Please subscribe to follow our activities.
Time Series Monitoring Using Vision Language Models: Håkon Nese (Aker BP)
A Prototypical Model for Few-Shot Medical Image Segmentation: Hyeongji Kim...
The Role of Computational Pathology in Tomorrow’s Medicine: Geert Litjens (Radboud University)
In Search of Hidden Talents: Emergence in Foundation Models: Oskar Skean (University of Kentucky)
Evaluating Foundation and Agentic Models in the Age of Trust: Srishti Gautam (Microsoft)
Diffusion Model Meets XAI: Counterfactual Generation for Model Debugging: Nina Weng (DTU)
The Compression Paradox: Why AI and Humans See the World Differently: Ravid Shwartz-Ziv
FM4CS: A Versatile Foundation Model for Earth Observation Applications: Arnt-Børre Salberg (NR)
Addressing Label Shift in Distributed Learning via Entropy Regularization: Zhiyuan Wu (UiO)
Large Language Models Under the Hood: Language Technology Group, UiO
Explainable Methods for Computer-Aided Diagnosis: Anuja Vats (NTNU)
Responsible and Explainable Artificial Intelligence: Virginia Dignum & Leila Methnani
Structure-Preserving Machine Learning for Physical Systems: Sølve Eidnes (SINTEF Digital)
Aleatoric and Epistemic Uncertainty in Statistics and Machine Learning: Willem Waegeman
Salmon tracking for improved salmon welfare observation: Espen Berntzen Høgstedt (NTNU)
Principles for a Self-Explainable Model Through Information Theoretic Learning: Changkyu Choi (UiT)
Satellite Imagery-Based Deep Learning for Sustainable Development: Donghyun Ahn & Jeasurk Yang
Layer-wise Analysis of Transformer Models in Vision and Audio Processing: Teresa Dorszewski (DTU)
Using conformal prediction for novelty detection in microfossil analysis: Iver Martinsen (UiT)
Representation Learning of Visual Features Through Self-Supervision: Thalles Silva
Equivariant Self-Supervision: Exploiting Inductive Biases of Capsule Networks: Aiden Durrant
Towards Efficient Geometric Representation: Young Min Kim (Seoul National University)
Modular Superpixel Tokenization in Vision Transformers: Marius Aasan (University of Oslo)
A Closer Look at Cancer Classification in Histopathology: Dhananjay Tomar (UiO)
Neural Explanation Masks for Representation Learning: Bjørn Leth Møller (Copenhagen University)
Anomaly Detection with Conditioned Denoising Diffusion Models: A. Mousakhan (University of Freiburg)
Benefits of Anatomical Motion Mode Imaging in LV Automatic Measurement: Durgesh K. Singh (UiT)
FreqRISE: Explaining time series using frequency masking: Thea Brüsch (DTU Compute)
Annotation-Free Feature Learning for Improved Acoustic Target Classification: Ahmet Pala (UiB)
Towards Explainable AI 2.0 with Concept-based Explanations: Reduan Achtibat & Maximilian Dreyer