Workshop
Workshop on Sparsity in LLMs (SLLM): Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference
Tianlong Chen · Utku Evci · Yani Ioannou · Berivan Isik · Shiwei Liu · Mohammed Adnan · Aleksandra I. Nowak · Ashwinee Panda
Hall 4 #7
Sat 26 Apr, 6 p.m. PDT
Large Language Models (LLMs) have emerged as transformative tools in both research and industry, excelling across a wide array of tasks. However, their growing computational demands, especially during inference, raise significant concerns about accessibility, environmental sustainability, and deployment feasibility. At the same time, sparsity-based techniques are proving critical not just for improving efficiency but also for enhancing interpretability, modularity, and adaptability in AI systems. This workshop aims to bring together researchers and practitioners from academia and industry who are advancing the frontiers of sparsity in deep learning. Our scope spans several interrelated topics, including Mixture of Experts (MoEs), LLM inference and serving, network pruning, sparse training, distillation, activation sparsity, low-rank adapters, hardware innovations, and quantization. A key objective is to foster connections and unlock synergies between traditionally independent yet highly related research areas, such as activation sparsity and sparse autoencoders (SAEs), or quantization and KV cache compression. Rather than focusing solely on efficiency, we aim to explore how sparsity can serve as a unifying framework across multiple dimensions of AI, driving advances in interpretability, generalization, and system design. By facilitating the exchange of ideas across these topics, the workshop will create new opportunities for innovation. We encourage participants to think beyond traditional constraints, exploring how different forms of sparsity can inform each other and yield new algorithms. Whether the goal is faster inference, modular architectures, or more interpretable models, our aim is to catalyze research that deepens the integration of sparsity within AI.
Schedule
Sat 6:00 p.m. - 6:05 p.m. | Opening (Intro) | Yani Ioannou · Aleksandra Nowak · Mohammed Adnan · Shiwei Liu · Utku Evci · Ashwinee Panda
Sat 6:05 p.m. - 6:30 p.m. | Invited Talk: Dan Alistarh (ISTA) | Dan Alistarh
Sat 6:30 p.m. - 6:45 p.m. | Oral #1: Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models | Vimal Thilak
Sat 6:45 p.m. - 7:00 p.m. | Oral #2: Matryoshka Quantization | Pranav Nair
Sat 7:00 p.m. - 7:30 p.m. | Mentoring + Coffee Break
Sat 7:30 p.m. - 7:55 p.m. | Invited Talk: Yuandong Tian (Meta) | Yuandong Tian
Sat 7:55 p.m. - 8:10 p.m. | Oral #3: QuEST: Training Accurate LLMs over Highly-Compressed Weights and Activation | Andrei Panferov
Sat 8:09 p.m. - 9:10 p.m. | Poster Session #1
Sat 8:10 p.m. - 9:10 p.m. | Mixture-of-Mamba: Enhancing Multi-Modal State-Space Models with Modality-Aware Sparsity (Poster) | Victor Weixin Liang · Junhong Shen · Genghan Zhang · Ning Dong · Luke Zettlemoyer · Lili Yu
Sat 8:10 p.m. - 9:10 p.m. | ChamaleonLLM: Batch-Aware Dynamic Low-Rank Adaptation via Inference-Time Clusters (Poster) | Kamer Yuksel · Hassan Sawaf
Sat 8:10 p.m. - 9:10 p.m. | Exploring the dual lottery ticket hypothesis in finetuning through specialised sparsification (Poster) | Sampreeth R S · Arindam Biswas · Pabitra Mitra · Biswajit Basu
Sat 8:10 p.m. - 9:10 p.m. | Low-rank Adapting Models for Sparse Autoencoders (Poster) | Matthew Chen · Josh Engels · Max Tegmark
Sat 8:10 p.m. - 9:10 p.m. | Scaling Sparse Feature Circuits For Studying In-Context Learning (Poster) | Dmitrii Kharlapenko · Stepan Shabalin · Arthur Conmy · Neel Nanda
Sat 8:10 p.m. - 9:10 p.m. | Sparse Spectral Training and Inference on Euclidean and Hyperbolic Neural Networks (Poster) | Jialin Zhao · Yingtao Zhang · Xinghang Li · Huaping Liu · Carlo Vittorio Cannistraci
Sat 8:10 p.m. - 9:10 p.m. | LLMs Know What to Drop: Self-Attention Guided KV Cache Eviction for Efficient Long-Context Inference (Poster) | Guangtao Wang · Shubhangi Upasani · Chen Wu · Darshan Gandhi · Jonathan Li · Changran Hu · Bo Li · Urmish Thakker
Sat 8:10 p.m. - 9:10 p.m. | Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training (Poster) | Elia Cunegatti · Leonardo Lucio Custode · Giovanni Iacca
Sat 8:10 p.m. - 9:10 p.m. | FASTER, CHEAPER, JUST AS GOOD: COST- AND LATENCY-CONSTRAINED ROUTING FOR LLMS (Poster) | Javid Lakha · Minlan Yu · Rana Shahout
Sat 8:10 p.m. - 9:10 p.m. | LEXICO: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries (Poster) | Junhyuck Kim · Jongho Park · Jaewoong Cho · Dimitris Papailiopoulos
Sat 8:10 p.m. - 9:10 p.m. | RLMedusa: Reinforcement Learning for Multiple Decoding Heads to Accelerate LLM Inference (Poster) | Aadit Juneja · Parsa Idehpour
Sat 8:10 p.m. - 9:10 p.m. | Matryoshka Quantization (Poster) | Pranav Nair · Puranjay Datta · Jeff Dean · Prateek Jain · Aditya Kusupati
Sat 8:10 p.m. - 9:10 p.m. | Antipodal Pairing and Mechanistic Signals in Dense SAE Latents (Poster) | Alessandro Stolfo · Ben Wu · Mrinmaya Sachan
Sat 8:10 p.m. - 9:10 p.m. | Unveiling Simplicities of Attention: Adaptive Long-Context Head Identification (Poster) | Konstantin Donhauser · Charles Arnal · Mohammad Pezeshki · Vivien Cabannes · David Lopez-Paz · Kartik Ahuja
Sat 8:10 p.m. - 9:10 p.m. | 2SSP: A Two-Stage Framework for Structured Pruning of LLMs (Poster) | Fabrizio Sandri · Elia Cunegatti · Giovanni Iacca
Sat 8:10 p.m. - 9:10 p.m. | Compressed sparse tiles for memory-efficient unstructured and semi-structured sparsity (Poster) | Mike Lasby · Max Zimmer · Sebastian Pokutta · Erik Schultheis
Sat 8:10 p.m. - 9:10 p.m. | Symmetric Pruning for Large Language Models (Poster) | Kai Yi · Peter Richtarik
Sat 8:10 p.m. - 9:10 p.m. | PRUNING AS A DEFENSE: REDUCING MEMORIZATION IN LARGE LANGUAGE MODELS (Poster) | Mansi Gupta · Nikhar Waghela · Sarthak Gupta · Shourya Goel · Sanjif Shanmugavelu
Sat 8:10 p.m. - 9:10 p.m. | Scaling Laws and Efficient Inference for Ternary Language Models (Poster) | Tejas Vaidhya · Ayush Kaushal · Vineet Jain · Francis Couture-Harpin · Prashant Shishodia · Majid Behbahani · Irina Rish · Yuriy Nevmyvaka
Sat 8:10 p.m. - 9:10 p.m. | One Must Imagine Experts Happy: Rebalancing Neural Routers via Constrained Optimization (Poster) | Kushal Thaman
Sat 8:10 p.m. - 9:10 p.m. | How can representation dimension dominate structurally pruned LLMs? (Poster) | Mingxue Xu · Lisa Alazraki · Danilo Mandic
Sat 8:10 p.m. - 9:10 p.m. | On multi-token prediction for efficient LLM inference (Poster) | Somesh Mehra · Javier Alonso Garcia · Lukas Mauch
Sat 8:10 p.m. - 9:10 p.m. | Steering Fine Tuning with Targeted Concept Ablation (Poster) | Helena Casademunt · Caden Juang · Senthooran Rajamanoharan · Neel Nanda
Sat 8:10 p.m. - 9:10 p.m. | Understanding the Difficulty of Low-Precision Post-Training Quantization for LLMs (Poster) | Zifei Xu · Sayeh Sharify · Wanzin Yazar · Tristan Webb · Xin Wang
Sat 8:10 p.m. - 9:10 p.m. | Low-Rank is Required for Pruning LLMs (Poster) | Stephen Zhang · Vardan Papyan
Sat 8:10 p.m. - 9:10 p.m. | QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead (Poster) | Amir Zandieh · Majid Daliri · Insu Han
Sat 8:10 p.m. - 9:10 p.m. | Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected (Poster) | Yingtao Zhang · Jialin Zhao · Wenjing Wu · Ziheng Liao · Umberto Michieli · Carlo Vittorio Cannistraci
Sat 8:10 p.m. - 9:10 p.m. | LoRA Without Forgetting: Freezing and Sparse Masking for Low-Rank Adaptation (Poster) | Juzheng Zhang · Jiacheng You · Ashwinee Panda · Tom Goldstein
Sat 8:10 p.m. - 9:10 p.m. | CAMEx: Curvature-aware Merging of Experts (Poster) | Dung Viet Nguyen · Minh Nguyen · Luc Nguyen · Rachel Teo · Tan Nguyen · Duy Linh Tran
Sat 8:10 p.m. - 9:10 p.m. | The Surprising Effectiveness of Randomness in LLM Pruning (Poster) | Shuyao Xu · Liu Jiayao · Zhenfeng He · Cheng Peng · Weidi Xu
Sat 8:10 p.m. - 9:10 p.m. | Grokking Beyond the Euclidean Norm of Model Parameters (Poster) | Tikeng Notsawo Pascal Junior · Guillaume Dumas · Guillaume Rabusseau
Sat 8:10 p.m. - 9:10 p.m. | Wanda++: Pruning Large Language Models via Regional Gradients (Poster) | Yifan Yang · Kai Zhen · Bhavana Ganesh · Aram Galstyan · Goeric Huybrechts · Markus Müller · Jonas Kübler · Rupak Swaminathan · Athanasios Mouchtaris · Sravan Babu Bodapati · Nathan Susanj · Zheng Zhang · Jack FitzGerald · Abhishek Kumar
Sat 8:10 p.m. - 9:10 p.m. | SpargeAttn: Training-Free Sparse Attention Accelerating Any Model Inference (Poster) | Jintao Zhang · Chendong Xiang · Haofeng Huang · Jia wei · Haocheng Xi · Jun Zhu · Jianfei Chen
Sat 8:10 p.m. - 9:10 p.m. | SPEX: Scaling Feature Interaction Explanations for LLMs (Poster) | Justin Kang · Landon Butler · Abhineet Agarwal · Yigit Efe Erginbas · Ramtin Pedarsani · Bin Yu · Kannan Ramchandran
Sat 8:10 p.m. - 9:10 p.m. | Contextual Sparsity as a Tool for Mechanistic Understanding of Retrieval in Hybrid Foundation Models (Poster) | Davide Zani · Felix Michalak · Steven Abreu
Sat 9:10 p.m. - 10:10 p.m. | Lunch + Mentoring
Sat 10:10 p.m. - 10:50 p.m. | Panel Discussion | Karolina Dzugiate · Pavlo Molchanov · Amir Yazdanbakhsh
Sat 10:50 p.m. - 11:15 p.m. | Invited Talk: Shane Bergsma (Cerebras) | Shane Bergsma
Sat 11:15 p.m. - 11:30 p.m. | Oral #4: Understanding the Difficulty of Low-Precision Post-Training Quantization for LLMs | Xin Wang
Sat 11:30 p.m. - 12:00 a.m. | Invited Talk: Amir Yazdanbakhsh (Google DeepMind) | Amir Yazdanbakhsh
Sun 12:00 a.m. - 12:30 a.m. | Mentoring + Coffee Break
Sun 12:30 a.m. - 1:30 a.m. | Mentoring Session | Aleksandra Nowak
Sun 1:29 a.m. - 2:30 a.m. | Poster Session #2
Sun 1:30 a.m. - 2:30 a.m. | Efficient Transformers via MPO-Based Low-Rank Factorization and Pruning (Poster) | Sam Mikhak · Venkata Sai Gummidi · Praneeth Medepalli · Kevin Zhu
Sun 1:30 a.m. - 2:30 a.m. | Scalable Continual Learning: Adaptive MoEs for Expanding Task Sets (Poster) | Adrian Candocia · Omer Inan · Raaghav Agarwal · Aamod Varma · Mark Davenport
Sun 1:30 a.m. - 2:30 a.m. | ClusterGen: Token Generation in Sublinear Time and Memory with Clustering KV Cache (Poster) | Amir Zandieh · Insu Han · Amin Karbasi · Vahab Mirrokni
Sun 1:30 a.m. - 2:30 a.m. | EvoPress: Accurate Dynamic Model Compression via Evolutionary Search (Poster) | Oliver Sieberling · Denis Kuznedelev · Dan Alistarh
Sun 1:30 a.m. - 2:30 a.m. | Recovery-on-the-line: Linear trends in post-quantization performance recovery (Poster) | Shashata Sawmya · Shuvom Sadhuka · Ragulan Sivakumar · Nir Shavit · Dan Alistarh · Bonnie Berger
Sun 1:30 a.m. - 2:30 a.m. | Accelerating Transformer Inference and Training with 2:4 Activation Sparsity (Poster) | Daniel HAZIZA · Timothy Chou · Dhruv Choudhary · Jesse Cai · Luca Wehrstedt · Francisco Massa · Jiecao Yu · Geonhwa Jeong · Supriya Rao · Patrick Labatut
Sun 1:30 a.m. - 2:30 a.m. | On the Spatial Structure of Mixture-of-Experts in Transformers (Poster) | Daniel Bershatsky · Ivan Oseledets
Sun 1:30 a.m. - 2:30 a.m. | Sparse and Wide Linear RNNs Are at the Efficiency-Performance Pareto Front (Poster) | Alessandro Pierro · Steven Abreu · Jonathan Timcheck · Philipp Stratmann · Sumit Shrestha
Sun 1:30 a.m. - 2:30 a.m. | Latent Scaling Robustly Identifies Chat-Specific Latents in Crosscoders (Poster) | Julian Minder · Clément Dumas · Bilal Chughtai · Neel Nanda
Sun 1:30 a.m. - 2:30 a.m. | Evaluating LLM Memorization Using Soft Token Sparsity (Poster) | Zhili Feng · Yixuan Xu · Alexander Robey · Avi Schwarzschild · Zico Kolter
Sun 1:30 a.m. - 2:30 a.m. | High Frequency Latents Are Features, Not Bugs (Poster) | Xiaoqing (Lily) Sun · Josh Engels · Max Tegmark
Sun 1:30 a.m. - 2:30 a.m. | Differentiable Attention Sparsity via Structured $D$-Gating (Poster) | Chris Kolb · Bernd Bischl · David Rügamer
Sun 1:30 a.m. - 2:30 a.m. | From Dense to Dynamic: Token-Difficulty Driven MoEfication of Pre-Trained LLMs (Poster) | Kumari Nishu · Sachin Mehta · Samira Abnar · Mehrdad Farajtabar · Maxwell Horton · Mahyar Najibi · Moin Nabi · Minsik Cho · Devang Naik
Sun 1:30 a.m. - 2:30 a.m. | KURTAIL: KURTOSIS-BASED LLM QUANTIZATION (Poster) | Mohammad Sadegh Akhondzadeh · Aleksandar Bojchevski · Evangelos Eleftheriou · Martino Dazzi
Sun 1:30 a.m. - 2:30 a.m. | MoE Lens - An Expert Is All You Need (Poster) | Marmik Chaudhari · Idhant Gulati · Nishkal Naresh Hundia · Pranav Karra · Shivam Raval
Sun 1:30 a.m. - 2:30 a.m. | Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient (Poster) | Jan Ludziejewski · Maciej Pióro · Jakub Krajewski · Michał Krutul · Jan Małaśnicki · Maciej Stefaniak · Piotr Sankowski · Marek Cygan · Kamil Adamczewski · Piotr Miłoś · Sebastian Jaszczur
Sun 1:30 a.m. - 2:30 a.m. | LEWIS (LayEr WIse Sparsity) - A Training Free Guided Model Merging Approach (Poster) | Hetarth Chopra · Vidhi Rambhia · Vikram Adve
Sun 1:30 a.m. - 2:30 a.m. | DeltaMoE: Memory-Efficient Inference for Merged Mixture of Experts with Delta Compression (Poster) | Boyko Borisov · Xiaozhe Yao · Nezihe Merve Gürel · Ana Klimovic
Sun 1:30 a.m. - 2:30 a.m. | QuEST: Training Accurate LLMs over Highly-Compressed Weights and Activation (Poster) | Andrei Panferov · Jiale Chen · Soroush Tabesh · Roberto Castro · Mahdi Nikdan · Dan Alistarh
Sun 1:30 a.m. - 2:30 a.m. | ReALLM: a general framework for LLM compression and fine-tuning (Poster) | Lisa Bedin · Louis Leconte · Van Minh Nguyen · Eric Moulines
Sun 1:30 a.m. - 2:30 a.m. | LogQuant: Log-Distributed 2-Bit Quantization of KV Cache with Superior Accuracy Preservation (Poster) | CHEN HAN · Zicong Jiang · zining zhang · Bingsheng He · Luo Pingyi · Mian Lu · Yuqiang Chen
Sun 1:30 a.m. - 2:30 a.m. | Prefix and Output Length-Aware Scheduling for Efficient Online LLM Inference (Poster) | Iñaki Arango · Ayush Noori · Yepeng Huang · Rana Shahout · Minlan Yu
Sun 1:30 a.m. - 2:30 a.m. | S2-ATTENTION: HARDWARE-AWARE CONTEXT SHARDING AMONG ATTENTION HEADS (Poster) | Xihui Lin · Yunan Zhang · Suyu Ge · Liliang Ren · Barun Patra · Vishrav Chaudhary · Hao Peng · Xia Song
Sun 1:30 a.m. - 2:30 a.m. | InhibiDistilbert: Knowledge Distillation for a ReLU and Addition-based Transformer (Poster) | Tony Zhang · Rickard Nakamura Brännvall
Sun 1:30 a.m. - 2:30 a.m. | Pivoting Factorization: A Compact Meta Low-Rank Representation of Sparsity for Efficient Inference in Large Language Models (Poster) | Jialin Zhao · Yingtao Zhang · Carlo Vittorio Cannistraci
Sun 1:30 a.m. - 2:30 a.m. | Divide, Reweight, and Conquer: A Logit Arithmetic Approach for In-Context Learning (Poster) | Chengsong Huang · Langlin Huang · Jiaxin Huang
Sun 1:30 a.m. - 2:30 a.m. | LoRAM: Low-Rank Adaptation of Large Language Models on Manifold (Poster) | Xiaowen Jiang · Xun Wang · Sebastian Stich
Sun 1:30 a.m. - 2:30 a.m. | Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models (Poster) | Samira Abnar · Harshay Shah · Dan Busbridge · Alaaeldin Ali · Joshua Susskind · Vimal Thilak
Sun 1:30 a.m. - 2:30 a.m. | A UNIFIED FRAMEWORK FOR SHAPE PRESERVING COMPRESSION OF LARGE LANGUAGE MODELS (Poster) | Lawrence Liu · Inesh Chakrabarti · Yixiao Li · Mengdi Wang · Tuo Zhao · Lin Yang
Sun 1:30 a.m. - 2:30 a.m. | Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models (Poster) | Yuto Kanda · Kenji Hatano
Sun 1:30 a.m. - 2:30 a.m. | Sparse Gradient Compression for Fine-Tuning Large Language Models (Poster) | David Yang · Mohammad Mohammadi Amiri · Tejaswini Pedapati · Subhajit Chaudhury · Pin-Yu Chen
Sun 1:30 a.m. - 2:30 a.m. | How Sparse Attention Approximates Exact Attention? Your Attention is Naturally $n^C$-Sparse (Poster) | Zhao Song · Jing Xiong · Chiwun Yang
Sun 1:30 a.m. - 2:30 a.m. | MobiLlama: Towards Accurate & Lightweight Fully Transparent GPT (Poster) | Omkar Thawakar · Ashmal Vayani · Salman Khan · Hisham Cholakkal · Rao Anwer · Michael Felsberg · Timothy Baldwin · Eric P Xing · Fahad Khan
Sun 1:30 a.m. - 2:30 a.m. | Q-Filters: Leveraging Query-Key Geometry for Efficient Key-Value Cache Compression (Poster) | Nathan Godey · Alessio Devoto · Yu Zhao · Simone Scardapane · Pasquale Minervini · Éric Clergerie · Benoît Sagot
Sun 1:30 a.m. - 2:30 a.m. | TASP: Preserving Training Dynamics in Transformers via NTK-Aware Structured Pruning (Poster) | Mengting Ai · Tianxin Wei · Jingrui He
Sun 1:30 a.m. - 2:30 a.m. | ResQ: Mixed-Precision Quantization of Large Language Models with Low-Rank Residuals (Poster) | Utkarsh Saxena · Sayeh Sharify · Kaushik Roy · Xin Wang
Sun 2:30 a.m. - 3:00 a.m. | Breakout Discussions
Sun 2:30 a.m. - 3:00 a.m. | Award Ceremony | Ashwinee Panda · Yani Ioannou · Aleksandra Nowak · Mohammed Adnan · Utku Evci
Sun 5:00 a.m. - 3:00 a.m. | Social