Complete manuscripts may be up to six pages in standard IEEE two-column format; the first six pages are free, and up to two additional pages may be included for a fee of $150 per page, for a maximum of eight pages. Authors are required to clearly articulate the significance of their work, highlight its novel contributions, and describe its current development status. Submissions follow a double-blind review policy: authors must anonymize their manuscripts to ensure impartial evaluation. Manuscripts that exceed the page limit or otherwise fail to adhere to the submission guidelines, including anonymization, will be returned without review to maintain the integrity and fairness of the evaluation process. Scientific papers can be submitted to the following tracks:
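The page-fee rule above reduces to simple arithmetic; a minimal sketch (the function name, signature, and defaults are illustrative and not part of any official submission system):

```python
def extra_page_fee(pages: int, free_pages: int = 6, max_pages: int = 8,
                   fee_per_page: int = 150) -> int:
    """Return the extra-page fee in USD for a manuscript of `pages` pages.

    The first `free_pages` pages are free; each page beyond that costs
    `fee_per_page`, up to a hard limit of `max_pages` total pages.
    """
    if pages > max_pages:
        raise ValueError(f"manuscripts may not exceed {max_pages} pages")
    # Pages within the free allowance incur no charge.
    return max(0, pages - free_pages) * fee_per_page
```

For example, a seven-page manuscript would owe $150 and a full eight-page manuscript $300 under this rule.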
This track explores the physical and circuit-level foundations of intelligent computing. Topics include compute-in-memory (CIM/PIM) and near-sensor computing, mixed-signal and analog MAC design, on-chip learning, low-power digital accelerators, and reliability/variation-tolerant architectures. Contributions on heterogeneous 2.5D/3D integration, chiplet/interposer co-design, photonic and quantum devices, and novel materials for AI computing are encouraged.
Keywords: compute-in-memory, PIM/CIM, analog MACs, on-chip learning, heterogeneous integration, chiplets, photonic/quantum AI devices, device reliability.
This track focuses on machine-learning-enabled EDA and cross-layer optimization. Topics include predictive PnR and timing closure, reinforcement-learning-guided design-space exploration, generative circuit synthesis, formal verification and test (ATPG), yield and cost modeling, and runtime telemetry-driven optimization. Submissions combining HW/SW co-synthesis with AI-assisted reliability and power-performance-area (PPA) trade-offs are especially welcome.
Keywords: ML for EDA, learning-guided PnR, timing closure, formal verification, test/ATPG, multi-objective DSE, HW/SW co-synthesis, telemetry optimization.
This track covers architectural innovations for next-generation AI accelerators and systems. Topics include systolic and dataflow architectures, sparsity/quantization support, NoC/interconnect co-design, HBM/HMC memory hierarchies, RISC-V and chiplet-based SoCs, compiler/runtime co-optimization, and chiplet interconnect and coherence standards (e.g., UCIe). Modeling and evaluation frameworks for scalable inference and training are also encouraged.
Keywords: AI accelerators, systolic/dataflow design, RISC-V, chiplet SoCs, memory-centric architectures, NoC/interconnect, HBM/HMC, compiler/runtime co-design.
This track emphasizes AI-native communication and network intelligence across 6G/7G systems. Topics include semantic and goal-oriented communications, federated and in-network learning, O-RAN RIC (xApps/rApps), integrated sensing-communication-compute (ISCC), reconfigurable intelligent surfaces (RIS), and digital twins of networks. Research on self-organizing networks, resource allocation, over-the-air aggregation, and field testbeds is encouraged.
Keywords: AI-native 6G/7G, semantic/goal-oriented comms, federated/in-network learning, RIS/ISAC, O-RAN RIC (xApps/rApps), digital twin networks, self-optimizing networks.
This track addresses AI computing infrastructure and data engineering. Areas include data pipelines, feature stores, vector databases, serverless AI frameworks, LLM serving and inference optimization, virtualization, and sustainability-aware resource management across the cloud–edge continuum. Work on digital twins for datacenter operations and blockchain-based data provenance is also welcome.
Keywords: cloud/edge orchestration, data engineering, vector DBs, MLOps/LLMOps, model serving, autoscaling/SLA, digital twins, blockchain provenance, sustainability.
This track explores the convergence of the Internet of Things (IoT), edge intelligence, and cyber-physical systems (CPS) for smart, connected, and autonomous environments. It focuses on intelligent sensing and actuation, embedded and edge AI, TinyML, real-time analytics, and time-sensitive networking (TSN) for low-latency and deterministic communication. Topics also include hardware-in-the-loop (HIL) simulation, functional safety, edge–cloud orchestration, and large-scale interoperability across industrial, urban, and critical infrastructure systems. Submissions addressing digital twins, self-adaptive IoT architectures, industrial wireless technologies, or next-generation IoT protocols are particularly encouraged.
Keywords: IoT, cyber-physical systems, edge intelligence, TinyML, smart sensing and actuation, real-time analytics, TSN, HIL, edge–cloud orchestration, interoperability, digital twins, adaptive IoT, industrial IoT, IoT security, smart infrastructure, connected systems.
This track unifies hardware and AI security for trustworthy intelligent systems. Topics include side-channel and fault-injection attacks, model watermarking and supply-chain security, red-teaming and safety cases, privacy-preserving learning, and runtime attestation of edge or cloud AI. Verification, certified robustness, and governed model lifecycles are also in scope.
Keywords: hardware security, AI security, model watermarking, supply-chain assurance, adversarial robustness, privacy (e.g., FL/DP/HE/MPC), runtime attestation, certified robustness.
This track covers autonomous and agentic AI in both software and embodied forms. Topics include SLAM, 3D perception, manipulation and grasping, task and motion planning (TAMP), reinforcement learning for control, shared autonomy, multi-robot coordination, and sim-to-real transfer. Submissions on safety-critical deployment, benchmarks, and field validation are encouraged.
Keywords: agentic AI, robotics, autonomy, SLAM, TAMP, shared autonomy, multi-robot coordination, sim-to-real, safety validation.
This track spans core algorithmic and model innovations in AI. Topics include foundation and multimodal models (LLMs, VLMs), retrieval-augmented generation (RAG), parameter-efficient fine-tuning (LoRA/QLoRA), reinforcement learning (RL/RLHF/RLAIF), generative design, neurosymbolic reasoning, and cognitive architectures. Work on model efficiency, alignment, verification, and continual learning is encouraged.
Keywords: foundation models, LLMs/VLMs, RAG, LoRA/QLoRA, RL/RLHF/RLAIF, generative design, neurosymbolic reasoning, continual learning, alignment, verification.
This track examines human–AI interfaces and decision support systems. Areas include explainable interfaces, uncertainty and provenance visualization, XR/AR/VR for operations and training, conversational UIs, and human-in-the-loop evaluation with cognitive load and ergonomic analysis. Contributions on trust calibration and safety UX are welcome.
Keywords: human-centered AI, explainability, interpretability, uncertainty visualization, XR/AR/VR, provenance UX, human-in-the-loop, trust and safety UX.
This track addresses fairness, accountability, and sustainability in AI lifecycles. Topics include risk management, impact assessment, auditing and reporting (model cards/datasheets), governance and policy frameworks, safety certification, and environmental impact analysis of training and inference. Submissions aligned with NIST AI RMF principles are encouraged.
Keywords: responsible AI, fairness, transparency, risk management, impact assessment, model cards/datasheets, data governance, sustainability metrics, policy compliance.
This track focuses on AI-driven applications and vertical domains that demonstrate real-world impact. Topics include digital medicine and healthcare, Industry 4.0 and predictive manufacturing, process control (fabs/chemicals), smart energy and mobility, grid intelligence, logistics and ports, urban digital twins, and resilient infrastructure. Submissions highlighting cross-domain integration and sustainability are encouraged.
Keywords: applied AI, AIoT, digital medicine, smart and connected health, Industry 4.0, predictive maintenance, process control, energy grids, mobility, logistics, smart cities, smart agriculture, resilience, sustainability.
This special session focuses on emerging enabling technologies that drive the convergence of the Internet of Things (IoT) and Artificial Intelligence (AI), with particular emphasis on embodied, physical, and agentic intelligence in real-world systems. In addition to foundational advances in hardware, software, and system architectures, the session highlights how these technologies translate into impactful vertical applications across domains such as smart cities, healthcare, industrial automation, robotics, and intelligent infrastructure. Topics include edge and embedded AI, distributed and multi-agent systems, large-scale AI models, and next-generation communication technologies. The session aims to bring together researchers and practitioners to explore novel enablers that support scalable, adaptive, secure, and trustworthy IoT–AI ecosystems in diverse real-world environments.
Keywords: Emerging & Enabling Technologies, Embodied AI, Physical AI, Agentic AI, Edge AI / Edge Intelligence, Embedded AI / TinyML, Large Language Models (LLMs), Mixture-of-Experts (MoE), Distributed & Multi-Agent Systems, Cyber-Physical Systems, Digital Twins, AI Hardware Acceleration (NPUs, ASICs), Low-Power & Energy-Efficient Hardware, 5G/6G Connectivity, Vertical Applications (Smart Cities, Healthcare, IIoT)
This special session focuses on emerging blockchain and distributed ledger technologies as key enablers for secure, decentralized, and trustworthy IoT and AI systems. It explores how blockchain can address critical challenges such as data integrity, trust management, decentralized coordination, and secure data sharing across heterogeneous and large-scale IoT–AI ecosystems. The session also highlights integration with edge computing, AI-driven automation, and smart contracts, as well as applications in domains such as smart cities, supply chains, healthcare, and industrial IoT. Contributions on novel architectures, scalability, interoperability, and privacy-preserving mechanisms are particularly encouraged.
Keywords: Blockchain, Distributed Ledger Technologies (DLT), Smart Contracts, Decentralized Systems, Trust Management, Secure Data Sharing, IoT Security, AI Integration, Edge-Blockchain Integration, Privacy-Preserving Mechanisms, Scalability, Interoperability, Supply Chain, Smart Cities, Industrial IoT (IIoT)
This special session focuses on emerging technologies that enable the convergence of Edge AI and Generative AI, with particular emphasis on on-device large language models (LLMs) and intelligent systems. As AI capabilities move from cloud-centric infrastructures to edge and embedded platforms, new challenges arise in efficiency, latency, scalability, and privacy. The session covers a broad spectrum of edge intelligence, ranging from ultra-low-power TinyML systems to resource-aware deployment of generative and foundation models on mobile, embedded, and edge devices. Topics include model compression and quantization, Mixture-of-Experts (MoE), efficient inference, distributed edge–cloud collaboration, and hardware–software co-design. The session also highlights vertical applications in wearables, robotics, smart infrastructure, and industrial IoT, aiming to advance scalable, efficient, and trustworthy AI at the edge.
Keywords: Edge AI, Generative AI, On-Device LLMs, Foundation Models, Multimodal Models, TinyML, Model Compression and Quantization, Mixture-of-Experts (MoE), Efficient Inference, Distributed Edge–Cloud Intelligence, AI Hardware Acceleration, Low-Power AI Systems, Privacy-Preserving AI, Embedded and Mobile AI, Vertical Applications