
Granted Scholarships

Read more about Grants and Funding here

PhD fellowship under the Danish Quantum Algorithm Academy

University of Copenhagen

Stephan Paul Albert Sauer

Quantum chemistry has become a forerunner for more general applications of quantum computers, and several studies have already employed current noisy intermediate-scale quantum (NISQ) computers for quantum chemical calculations. However, all quantum chemistry studies on quantum computers have so far concentrated on non-relativistic quantum chemistry and thus only on the light elements of the periodic table. This is a problem, as heavier elements such as Ru, Os, Pt, Pd, Cd, Pb and Hg play an essential role in catalysis, in attempts to solve the energy crisis through sunlight exploitation and light harvesting, and in cancer treatment, or are the source of serious heavy-metal contamination of the environment. The effectiveness of new catalysts, anticancer drugs, or metal-ion recognition compounds for the remediation of heavy-metal contamination hinges on their structural configuration, particularly around the heavy elements. Quantum chemical calculations are a frequently chosen way to obtain the necessary information about the structure of such compounds. However, precise calculations for heavy elements are challenging due to their strong electron correlation and significant relativistic effects, necessitating the use of the relativistic Dirac equation and multi-configurational methods, which are severely limited on classical computers. Quantum computers offer a promising avenue for addressing these challenges, since multi-configurational methods can scale polynomially on quantum machines. However, quantum computer algorithms for quantum chemical calculations based on the relativistic Dirac equation have not yet been presented. The current project will therefore develop accurate relativistic quantum chemistry methods for calculations on current NISQ computers, building on our recent algorithmic developments for non-relativistic quantum chemical computing.
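As a schematic illustration only (standard notation, not taken from the project description), the electronic-structure problem targeted by such methods can be written in second quantization as

\[ \hat H \;=\; \sum_{pq} h_{pq}\, \hat a_p^\dagger \hat a_q \;+\; \tfrac{1}{2} \sum_{pqrs} g_{pqrs}\, \hat a_p^\dagger \hat a_q^\dagger \hat a_s \hat a_r , \]

where, in a relativistic treatment, the one- and two-electron integrals h_{pq} and g_{pqrs} would be evaluated over four-component Dirac spinors rather than non-relativistic orbitals. Mapping the fermionic operators to qubit operators (for example via the Jordan-Wigner transformation) is what allows a multi-configurational wave function to be represented and optimized on a quantum computer.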

University of Copenhagen

Daniel Stilck França

Combinatorial optimization problems like MaxCut or the Traveling Salesman Problem are of central importance both in mathematics and computer science and in various industrial applications, such as scheduling or protein folding problems. Thus, it should come as no surprise that designing quantum algorithms that offer a (potentially exponential) advantage over classical algorithms has been the subject of intense research over the past decades. Indeed, whole algorithmic frameworks and platforms, such as quantum annealers, were developed for this sole purpose. Despite these efforts, it remains unclear to what extent existing quantum algorithms offer significant speedups for combinatorial optimization. Recently, a novel algorithmic framework, Decoded Quantum Interferometry (DQI), was introduced by Jordan et al. to tackle certain combinatorial optimization problems. It comes with provable performance guarantees and exploits a novel connection between quantum algorithms for combinatorial optimization and classical linear codes. It holds great promise to finally deliver quantum speedups for certain instances of these central problems; in the words of the prominent quantum researcher Scott Aaronson, "exponential quantum speedups for NP complete problems, for real this time?!". However, in spite of this great promise, further research is required to adapt the algorithm to industrially relevant instances. In particular, it is unclear how to impose constraints or consider weighted instances of combinatorial problems like MaxCut. The focus of this PhD project will therefore be to develop extensions of the DQI algorithm that can handle both weighted and constrained combinatorial optimization problems. After that, the student will investigate its performance on various industrially motivated problems to identify those for which it holds the most promise.
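For concreteness (a standard formulation, not anything specific to the project), the weighted MaxCut problem referred to above asks for a spin assignment maximizing

\[ C(x) \;=\; \sum_{(i,j) \in E} w_{ij}\, \frac{1 - x_i x_j}{2}, \qquad x_i \in \{-1, +1\}, \]

where the w_{ij} are edge weights; the unweighted case sets all w_{ij} = 1, and adding constraints restricts the admissible assignments x. Handling general weights and constraints within DQI is exactly the open question the project targets.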

Aarhus University

Ove Christiansen

This project focuses on developing advanced computational methods to calculate molecular partition functions, a critical step in understanding chemical behavior. Partition functions are the foundation of quantum statistical mechanics and the key to determining molecular free energies. Free energies, in turn, dictate the stability and spontaneity of molecular processes, such as whether a reaction will occur. By combining energy-related (enthalpic) and configurational (entropic) contributions, free energies provide a comprehensive picture of molecular systems. Accurate partition function calculations are essential for understanding processes like chemical reactions, protein folding, and drug binding. The proposed approach uses quantum computing to compute contributions to the partition function from internal molecular motions, overcoming the limitations of classical methods. Ideally, these calculations should be based on quantum statistical mechanics, where the system’s quantum states are partially occupied depending on temperature. However, the main challenge is the exponential increase in thermally accessible states and the growing complexity of calculating them as the system size expands. This project addresses this challenge by transforming the problem into a quantum formulation that can be mapped onto qubits, replacing traditional sampling with direct quantum computations of molecular properties. This high-risk, high-reward project demands major advancements in theoretical models and quantum algorithms. While quantum computing is not yet competitive for solving chemical problems, its rapid development holds immense promise. As quantum computers become more powerful, they could enable highly accurate and efficient molecular simulations. This would revolutionize computational chemistry, making quantum calculations of partition functions a central tool for designing molecules, drugs, and materials with specific properties.
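As a brief reminder of the underlying quantities (standard definitions, not specific to this project), the canonical partition function and the free energy it determines are

\[ Z \;=\; \mathrm{Tr}\, e^{-\beta \hat H} \;=\; \sum_n e^{-\beta E_n}, \qquad F \;=\; -\,k_B T \ln Z, \qquad \beta = \frac{1}{k_B T}, \]

so each quantum state n contributes with thermal weight e^{-\beta E_n}/Z. The exponential growth in the number of thermally relevant states with system size is precisely the bottleneck the project proposes to attack with a qubit-based formulation.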

Aarhus University

Jan Cornelis van de Pol

The QoptiQ project aims at automated optimization of quantum circuits. Current quantum computers are noisy, which forms the main bottleneck for serious applications. Optimizing quantum circuits is essential to enable early applications of quantum computing, since it reduces the noise. In the longer term, for fault-tolerant quantum computing, quantum-circuit optimization is even more important, since it reduces the overhead for quantum error correction. Current quantum compilers already apply some circuit reduction, but their heuristics are far from optimal. The QoptiQ project will develop a flexible framework to study provably optimal quantum circuits, and invent algorithms to compute them: it will address several optimization metrics, including the number of gates and circuit depth; it will allow for liberal forms of quantum-circuit equivalence, such as equality modulo qubit permutation or relative phase; and it will aim at optimality for particular quantum platforms with limited qubit connectivity. The main scientific challenge will be the scalability of the algorithms that compute optimal quantum circuits. We will study the problem for various quantum gate sets. We will develop a unique methodology, combining algebraic insights (group theory) to exploit symmetries, the use of state-of-the-art constraint solvers (such as SAT solvers and planning tools), and novel high-performance, parallel algorithms and concurrent data structures. The results of the project will be validated both in quantum-circuit simulation tools and on actual quantum computers. We will compare the distributions obtained from simulating an ideal quantum platform versus a quantum platform with a noise model. We will also measure the increased accuracy of actual quantum computations due to using optimal quantum circuits. When successful, our research will enable larger quantum applications on given quantum hardware, for instance finding the ground state of larger molecules.
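As a minimal illustration of what circuit equivalence up to a global phase means in practice, the following toy sketch in Python/NumPy (not the QoptiQ framework; the single-qubit gates and tolerance are arbitrary choices) checks whether two small circuits implement the same unitary:

import numpy as np

# Single-qubit toy example: the circuit H·Z·H implements the same unitary as X.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
Z = np.array([[1, 0], [0, -1]])                # Pauli-Z
X = np.array([[0, 1], [1, 0]])                 # Pauli-X

U_a = H @ Z @ H        # unitary of circuit A
U_b = X                # unitary of circuit B

def equal_up_to_global_phase(U, V, tol=1e-9):
    """Return True if U = exp(i*phi) * V for some real phase phi."""
    M = U @ V.conj().T
    t = np.trace(M)
    if abs(t) < tol:                   # trace vanishes only if the unitaries differ
        return False
    phase = t / abs(t)                 # candidate global phase
    return np.allclose(M, phase * np.eye(len(M)), atol=tol)

print(equal_up_to_global_phase(U_a, U_b))   # True

Checking equivalence modulo qubit permutation or relative phase, as the project intends, requires more refined notions than this simple global-phase check.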

University of Southern Denmark

Jørgen Ellegaard Andersen, Shan Shan

The Algorithms, Advantage and Applications of Gaussian Boson Sampling

Gaussian Boson Sampling (GBS) is a special-purpose photonic quantum computer with proven theoretical advantages and demonstrated experimental realizations. Solving Gaussian expectation (GE) problems using GBS has been shown to significantly outperform the standard Monte Carlo (MC) method. Given the ubiquity of GE problems in real-world applications, this approach offers a promising path toward achieving quantum usefulness in the near term. This project aims to advance the practical application of GBS by addressing key challenges, including hardware noise, limited scale, and the lack of practical use cases. To achieve this, we will: (1) develop new algorithms and theoretical tools to model and mitigate errors and noise in GBS hardware; (2) extend GBS algorithms to other quantum computing platforms, enhancing their applicability and adaptability to emerging hardware technologies; and (3) develop and implement a suite of use cases for the theory and algorithms developed in (1) and (2), testing them across various quantum computing platforms to demonstrate their feasibility and effectiveness. These efforts will bridge the gap between the theoretical promise of GBS and its practical implementation, paving the way for impactful quantum applications.
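For orientation (standard definitions, not drawn from the project), a Gaussian expectation problem asks for

\[ \mathbb{E}[f] \;=\; \int_{\mathbb{R}^n} f(x)\, \mathcal{N}(x; 0, \Sigma)\, dx, \]

which a classical Monte Carlo method estimates as \frac{1}{N} \sum_{k=1}^{N} f(x_k) with samples x_k \sim \mathcal{N}(0, \Sigma). The GBS-based approach referred to above aims to replace this classical sampling with measurements on a photonic device whose output statistics encode the same Gaussian integrals.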

Aarhus University and Kvantify ApS

Rasmus Berg Jensen, Frank Jensen, Hans Henrik Knudsen, Nikolaj Zinner

This project combines the development of a hierarchy of physics-based energy models, force fields (FFs), with quantum computing in order to reduce the computational time of performing accurate molecular-dynamics simulations. The project is a collaboration between Kvantify ApS (industrial partner), which develops drug-design software aided by quantum computing for pharmaceutical and biotech companies, and Aarhus University (academic partner), represented by the research group directed by Associate Professor Frank Jensen (FJ) at the Dept. of Chemistry, who runs a project on developing a hierarchy of FFs with systematic increases in accuracy. The central aim is to investigate whether quantum computing can significantly reduce the time it takes to solve for the electric polarization response, which is the main computational bottleneck for high-accuracy FFs.
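To make the bottleneck concrete, a generic induced-dipole model (a common way of treating polarization in force fields, not necessarily the specific hierarchy developed in this project) requires solving the self-consistent linear equations

\[ \mu_i \;=\; \alpha_i \Big( E_i^{\mathrm{perm}} + \sum_{j \neq i} T_{ij}\, \mu_j \Big) \]

for the induced dipoles \mu_i, where \alpha_i are atomic polarizabilities, E_i^{\mathrm{perm}} is the field from the permanent charges, and T_{ij} is the dipole interaction tensor. Solving a coupled linear system of this kind at every molecular-dynamics step is what dominates the cost of high-accuracy polarizable force fields, and it is this step the project asks whether quantum computing can accelerate.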

In the first scholarship calls, a total of 21 PhD and 17 Postdoc applications were received. Twelve of these were granted, and you can read more about them below.

Research projects by PhD Scholarship recipients

University of Southern Denmark

Jacob Kongsted, Peter Reinholdt, and Erik Rosendahl Kjellgren

Quantum chemistry is one of the scientific disciplines foreseen to be significantly impacted by the introduction of quantum computers. However, several challenges need to be overcome before quantum chemistry can really benefit from the opportunities brought by quantum computers. In this project we will focus on one of the factors limiting real quantum chemistry applications when implemented on quantum computers, namely the lack of efficient algorithms for error mitigation. Based on our recent algorithmic developments for quantum chemical computing approaches to the calculation of molecular properties, we suggest a novel and efficient error mitigation strategy designed for current noisy intermediate-scale quantum (NISQ) computers. Recently, we have performed a pilot implementation of our novel approach to error mitigation (unpublished), and very encouraging initial results showed that our approach leads to superior performance compared to other related error mitigation strategies. The current research project will investigate several aspects of this novel approach, including robustness, scalability, performance of the error mitigation strategy across different quantum computing architectures, and synergistic effects obtained from combining our suggested error mitigation strategy with other previously published approaches. Our goal is to establish the suggested error mitigation approach as the "standard" error mitigation strategy broadly used for quantum computing chemistry, thereby making a quantum leap in realizing accurate and scalable quantum chemical computing based on the opportunities brought by quantum computing.
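The project's own error mitigation strategy is unpublished, so it cannot be sketched here; purely as an illustration of the general shape of an error mitigation workflow, the following Python/NumPy sketch shows zero-noise extrapolation, a well-known alternative strategy, with made-up numbers:

import numpy as np

# Hypothetical expectation values of some observable, measured at artificially
# amplified noise levels (1.0 = native hardware noise; 2.0 and 3.0 = amplified).
noise_scales = np.array([1.0, 2.0, 3.0])
measured     = np.array([0.81, 0.66, 0.53])    # illustrative numbers only

# Fit a low-order polynomial in the noise scale and extrapolate to zero noise.
coeffs = np.polyfit(noise_scales, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(round(zero_noise_estimate, 3))

Any candidate "standard" strategy, including the one proposed in this project, would be benchmarked against baselines of this kind across different quantum architectures.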

Aalborg University

Kim Guldstrand Larsen, Christian Schilling, and Max Tschaikowski

We study the problem of equivalence checking between two quantum circuits. This scenario is motivated by the compilation of a high-level quantum circuit down to a low-level quantum circuit that can be implemented on a concrete quantum computer. Since current quantum computers are severely limited by noise, compilers aggressively minimize the circuit depth. It is evident that these optimizations must preserve the functional equivalence of the circuit. However, the problem of deciding equivalence between quantum circuits is known to be hard.

In this project, we will investigate the problem from a new angle based on tensor decision diagrams (TDDs). TDDs promise to combine the strengths of two highly successful research directions from the literature: tensor networks, which offer low-dimensional reasoning, and decision diagrams, which offer efficient encoding. However, to apply TDDs to equivalence checking, we must overcome several challenges. First, we need an efficient implementation with strong parallelization to make effective use of high-performance computers. Second, the main operation on TDDs - contraction - needs to be executed in a certain order following a so-called contraction plan. While each such plan leads to the same result, different plans lead to different intermediate representations, some of which may become prohibitively large. Thus, it is crucial to select an efficient contraction plan. However, it is currently unknown how this choice can be made. To this end, we will employ machine learning to find efficient contraction plans for TDDs. Finally, we will demonstrate the developed tools in case studies certifying quantum compilations. Overall, this project will accelerate the validation of quantum compilers, which has immediate benefits for quantum computing across application domains.
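To illustrate why the choice of contraction plan matters, the toy NumPy example below (plain dense tensors, not TDDs; the shapes are chosen arbitrarily) computes the same three-tensor contraction with two different plans, one of which creates a much larger intermediate tensor:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 100))     # A(i, a): small x large
B = rng.random((100, 2))     # B(a, j): large x small
C = rng.random((2, 100))     # C(j, b): small x large

# Plan 1: contract A with B first -> intermediate of shape (2, 2), tiny.
res1 = np.einsum('ij,jb->ib', np.einsum('ia,aj->ij', A, B), C)

# Plan 2: contract B with C first -> intermediate of shape (100, 100), much larger.
res2 = np.einsum('ia,ab->ib', A, np.einsum('aj,jb->ab', B, C))

print(np.allclose(res1, res2))   # True: identical result, very different peak cost

Learning to pick plans like Plan 1 automatically, but for TDD contractions on realistic circuits, is the role of the machine learning component described above.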

University of Copenhagen

Matthias Christandl

Quantum communication through noisy quantum channels is possible with the help of encoding and decoding procedures. The design of these encoding and decoding procedures falls under the field of quantum Shannon theory, where it is often assumed that they can be implemented with noise-free gates. However, this assumption does not hold in practice, as quantum computers are inherently noisy. Recently, Christandl and Müller-Hermes (Christandl and Müller-Hermes, IEEE Trans. Inf. Th. 70, 282 (2024)), and later together with Belzig (Belzig, Christandl and Müller-Hermes, IEEE Trans. Inf. Th. (2024)), established a high-level theory of fault-tolerant quantum communication, which also accounts for the noise in the realization of encoders and decoders. This has been done by combining fault-tolerant computation with quantum Shannon theory, with the help of so-called "interfaces". This PhD thesis aims to extend fault-tolerant quantum communication to practical codes, such as surface and colour codes, which are promising for fault-tolerant quantum computing due to their planar layout with nearest-neighbour connectivity. Fault-tolerant communication will be analysed against practical noise models such as amplitude and phase damping, as well as coherent errors. Finally, some fundamental questions will also be explored: What is the threshold value for fault-tolerant quantum communication with practical codes? What are the fault-tolerant capacity and the rate of communication? And is it possible to achieve the threshold in a uniform way, so that it does not depend on the quantum communication channel?
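As a reference point (standard textbook forms, not specific to this thesis), the amplitude-damping and phase-damping channels mentioned above act on a qubit state \rho as \rho \mapsto K_0 \rho K_0^\dagger + K_1 \rho K_1^\dagger with Kraus operators

\[ \text{amplitude damping: } K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma} \end{pmatrix}, \; K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix}; \qquad \text{phase damping: } K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\lambda} \end{pmatrix}, \; K_1 = \begin{pmatrix} 0 & 0 \\ 0 & \sqrt{\lambda} \end{pmatrix}. \]

Analysing fault-tolerant communication against these channels amounts to asking how well surface- and colour-code encoders, decoders, and interfaces protect quantum information against exactly this kind of noise.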

University of Copenhagen

Michael James Kastoryano

The development of practical, fault-tolerant quantum algorithms remains the critical challenge for the success of the field. The Quantum Metropolis (qMet) algorithm, which my collaborators and I have recently developed, stands as an especially promising candidate in this direction. By engineering the preparation of thermal states of quantum many-body Hamiltonians, qMet mirrors the classical Metropolis algorithm's utility in simulating materials, molecules, and optimization tasks. Previous attempts at quantum Metropolis algorithms faced significant challenges, lacking mathematical rigor and practicality. Our work overcomes these hurdles, providing a foundation for practical applications. This PhD project aims to delve into qMet's practical applications, focusing on resource estimation, simulations of materials, and optimization tasks. The Metropolis algorithm has been pivotal in simulating complex systems across various disciplines, leveraging system configurations to explore statistical distributions. The quantum version, qMet, promises similar revolutionary impacts, especially in quantum simulations and potentially in optimization and quantum inference. The project will tackle key challenges such as resource estimation for practical quantum advantage and exploring qMet's efficacy in simulating quantum many-body systems and optimizing classical problems. The feasibility of qMet for simulating ground states and its application in thermal annealing and dynamical systems will also be investigated. This includes a novel approach to optimization that leverages quantum dynamics, aiming to surpass classical and quantum-annealer efficiency.
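For readers unfamiliar with the classical algorithm that qMet mirrors, the sketch below is a plain classical Metropolis sampler for a small Ising chain. It only illustrates the classical acceptance rule min(1, e^{-beta*dE}); it is not the quantum algorithm, and all parameters are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
n, beta, steps = 20, 1.0, 10_000            # chain length, inverse temperature, updates
spins = rng.choice([-1, 1], size=n)

def energy(s):
    # Nearest-neighbour Ising energy with open boundaries and J = 1.
    return -np.sum(s[:-1] * s[1:])

for _ in range(steps):
    i = rng.integers(n)                      # propose flipping a single random spin
    dE = 2 * spins[i] * ((spins[i - 1] if i > 0 else 0) +
                         (spins[i + 1] if i < n - 1 else 0))
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i] *= -1                       # Metropolis acceptance rule

print(energy(spins))                         # energy of the final, approximately thermal sample

The quantum version aims instead to prepare thermal states of quantum many-body Hamiltonians directly on the quantum register, as described above.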

University of Copenhagen

Laura Mancinska

Learning properties of physical systems is an essential task across various scientific disciplines. Given that quantum states cannot be observed directly, understanding their properties introduces unique challenges. The core focus of this project is on creating new quantum subroutines and algorithms to tackle three fundamental quantum learning problems: tomography, spectrum estimation, and purification. For tomography and spectrum estimation, our objective is to derive a classical description. In contrast, for the purification problem, we aim to produce a purified version of a quantum state. Our three focus problems are well known and well studied. A key quantity of interest is the number of copies of the quantum state needed to learn the desired quantity. A novel aspect of our project is the investigation of these three problems within a model that allows for the retention of quantum memory across different measurements, a concept that, despite its natural appeal, has been underexplored, partly because proving lower bounds in this context is challenging. Another reason this model has remained less explored could be that, in current experiments, it is challenging to perform mid-circuit measurements. The aim of this research is to understand and quantify the benefits of retaining quantum memory between measurements and to explore the trade-offs between sample and memory complexity in quantum algorithms. This could pave the way for new, more efficient algorithms for quantum learning tasks, with broader implications for quantum computing and information theory.

University of Copenhagen

Albert Helmut Werner and Daniel Malz

Motivated by branch prediction in classical computing, this PhD project explores branch prediction in the context of quantum computing. Every modern computer chip contains branch predictors - circuits that try to guess the outcome of conditional statements and switches (i.e., which branch of the computation must be executed next). This allows the chip to pre-load larger chunks of the program. If the prediction was correct, this can save a tremendous amount of time; if it was wrong, the computed result is discarded and the actual branch is executed. In Rydberg tweezer arrays, currently perhaps the most advanced quantum computing platform, there is a significant difference (about an order of magnitude) between gate times and measurement times. Since the operations after a measurement depend on the measurement outcome, the computer must idle for significant amounts of time until the measurement result becomes available and the computation can proceed, which introduces errors. A simple back-of-the-envelope calculation shows that branch prediction may lead to significant speed-ups, but there are many open questions, both of a practical and a fundamental nature, which are the topic of the proposed PhD project. The first goal will be to study this new paradigm in fundamental quantum circuits: those used for active error correction and the quantum singular value transformation. Both are fundamental operations that employ measurements followed by conditional operations and run extensively as subroutines of larger algorithms such as phase estimation. Thus, speeding up these routines would lead to large and generally applicable benefits. The goal will be to establish a theory of quantum branch prediction, complete with the basic principle, key examples, and resource estimates on realistic platforms.
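The back-of-the-envelope argument can be written down directly. In the sketch below, the roughly order-of-magnitude gap between gate and measurement times is taken from the description above; the remaining numbers (amount of speculative work, prediction accuracy) are illustrative assumptions:

t_gate, t_meas = 1.0, 10.0        # arbitrary units; ~10x gap as stated above
gates_per_branch = 10             # work that could be executed speculatively
p_correct = 0.9                   # assumed accuracy of the branch predictor

# Without prediction: idle until the measurement result arrives, then run the branch.
t_no_prediction = t_meas + gates_per_branch * t_gate

# With prediction: run the guessed branch during the measurement; on a miss, redo it.
t_prediction = (max(t_meas, gates_per_branch * t_gate)
                + (1 - p_correct) * gates_per_branch * t_gate)

print(t_no_prediction, t_prediction)   # 20.0 vs 11.0 in these units

Quantifying this gain rigorously, including the error incurred while idling and the cost of discarded speculative work, is part of what the project sets out to do.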

Postdoc Scholarship recipients

University of Southern Denmark

Ernst Dennis Larson

Studying solid-state materials, such as catalysts, solar cells, and batteries, is crucial for various industries. Understanding the electronic structure of such materials is essential for improving existing ones and predicting new ones. Quantum mechanical methods, like density functional theory with periodic boundary conditions, are commonly used for such analyses. However, these methods are not well suited for strongly correlated systems, where multiconfigurational (MC) wave function methods are required. MC methods have recently seen a lot of development on quantum computers due to their promise of reducing the exponential scaling of such methods to a polynomial one. This holds promise for targeting ever-larger systems, such as solids, with multiconfigurational methods. This proposal aims to integrate advancements in the molecular variational quantum eigensolver algorithm with solid-state modelling techniques by porting and evaluating the performance of embedded cluster methods on NISQ devices. The project will initially enable the PDE-X model for the solid state, and the results will be used to generate benchmark data for quantum applications. These data will be used to verify the implementation of quantum simulators and as part of the developed noise error-mitigation strategy. Finally, the resulting algorithms will be applied to real problems in the solid state, such as transition-metal-doped metal oxides. On this basis, the study aims to serve as a pilot study highlighting the impact of combining quantum algorithmic development, solid-state modelling, and related fields.

University of Calgary

Thomas Theurer

Quantum communication and cryptography in realistic near-term settings. Recently, quantum communication has seen rapid progress and is expected to see widespread usage soon. However, there are still many open questions concerning optimal protocols in realistic settings that take noise and finite-copy effects into account. Using techniques from quantum information theory, quantum resource theories, and convex optimization, this project will address them. A highly relevant task in quantum communication is quantum key distribution, which allows for provably secure communication. Since, in practice, noise accumulates with increasing communication distance, long-distance quantum key distribution will need intermediate repeaters. While much is known about the quantum key repeater capacity in the asymptotic setting of an unbounded number of channel uses, less is known in the more realistic single-shot regime. By determining the single-shot quantum key repeater capacity and the protocol that achieves it, with both trusted and untrusted devices, I will find optimal long-distance protocols for secure communication with near-term quantum technology. Another central problem in quantum communication is the transmission of messages via noisy communication channels. To achieve both a high transmission rate and a high probability of correctly transmitting the message, the sender encodes it in a way that makes the message resilient against the channel's noise, and the receiver decodes it to guess the message. Typically, it is assumed that both the encoding and the decoding are error-free, which is unrealistic. I will address this shortcoming by finding bounds on fault-tolerant communication rates. Combining these results, I will determine the fault-tolerant quantum key repeater capacity, which is a milestone for harnessing the full power of quantum technology.

University of Porto

Nahid Binandel Dehaghavi

This proposal aims to develop quantum-driven solutions for multi-agent systems and advanced computation by designing new architectures for Quantum Physics-Informed Neural Networks (QPINNs) and integrating hybrid quantum-classical techniques. These approaches will address complex mathematical challenges, including partial differential equations (PDEs) such as the Hamilton-Jacobi-Bellman, Fokker-Planck, and Navier-Stokes equations, as well as large-scale nonlinear differential equations and optimization problems. Our objective is to develop robust QPINN algorithms that outperform classical methods by significantly enhancing computational efficiency through the power of quantum and supercomputing technologies. This involves conducting comprehensive performance evaluations and incorporating advanced error mitigation techniques to ensure consistent and reliable outcomes. Our objectives extend to applying these quantum solutions within a multi-agent system framework to tackle traffic and urban mobility problems. Specifically, the project will develop innovative game-theoretical models and optimal control approaches within a multi-agent-based decision support system to optimize resource use and ensure efficient mobility, thereby reducing environmental impact and promoting sustainability. By advancing the proposed quantum-based approaches, the project aims to demonstrate the scalability, adaptability, and reliability of quantum-enhanced technologies, transitioning from theoretical concepts to practical real-world applications. This comprehensive approach will not only highlight the capabilities of quantum computing in various sectors but also accelerate its adoption to solve key societal challenges, establishing a standard for future quantum computing projects.
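As a generic formulation (the standard physics-informed training objective, not the project's specific QPINN architectures), a network u_\theta approximating the solution of a PDE \mathcal{N}[u] = 0 on a domain \Omega with boundary data g is trained by minimizing

\[ \mathcal{L}(\theta) \;=\; \frac{1}{N_r} \sum_{i=1}^{N_r} \big| \mathcal{N}[u_\theta](x_i) \big|^2 \;+\; \frac{1}{N_b} \sum_{j=1}^{N_b} \big| u_\theta(x_j) - g(x_j) \big|^2 , \]

i.e. a residual term at interior collocation points plus a boundary/initial-condition term. In a quantum or hybrid variant, parts of u_\theta or of the loss evaluation are typically delegated to a parameterized quantum circuit, which is where the hoped-for computational advantage would enter.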

University of Copenhagen

Frederik Nathan

LEAP: a fault-tolerant quantum Metropolis algorithm implementable on near-term hardware. With this project I shall develop a resource-efficient and fault-tolerant quantum Metropolis algorithm, termed the Local Energy Attenuation Protocol (LEAP) algorithm. The algorithm can be used for (A) quantum optimization and (B) high-fidelity simulation of dissipative quantum processes in nature (dissipative dynamics, nonequilibrium steady states, and thermal states). These applications are highly relevant within industry (logistics, network design), finance (portfolio optimization), sustainability, and the life sciences (efficient high-fidelity simulation of quantum processes in physics and chemistry). The LEAP algorithm is based on a new approach I developed to efficiently describe quantum diffusion, termed the universal Lindblad approximation (ULA). The ULA formulates a simple quasi-local quantum jump process, enabling conventional simulation of quantum diffusion in systems of sizes beyond the range of historical approaches. The ULA is gaining widespread use, with several high-profile applications in recent years. The LEAP algorithm realizes the full potential of the ULA by implementing it in a quantum Monte Carlo algorithm, relevant for Gibbs sampling, quantum simulation, and optimization. The LEAP algorithm is highly resource-efficient and fault tolerant, meaning it can potentially be implemented on near-term hardware, which will be imperfect and resource-limited. Specifically, the LEAP update step involves only a quasi-local reconfiguration of qubits, making the algorithm efficient and fault tolerant, with limited, size-independent requirements on gate-circuit depth and width. With the project I shall (1) develop the LEAP algorithm from the ULA diffusive process, (2) estimate resource costs and establish fault tolerance, (3) demonstrate the LEAP algorithm on prototypical optimization and simulation problems, and (4) further improve the fidelity of the LEAP algorithm for simulations of dissipative quantum processes.
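For context (the standard Lindblad form, not the ULA construction itself), the dissipative dynamics targeted by LEAP is generated by a master equation of the type

\[ \frac{d\rho}{dt} \;=\; -\,i\,[H, \rho] \;+\; \sum_k \Big( L_k \rho L_k^\dagger \;-\; \tfrac{1}{2} \big\{ L_k^\dagger L_k, \rho \big\} \Big), \]

where the jump operators L_k encode the coupling to the environment; the quasi-local quantum jump processes referred to above correspond to particular, resource-friendly choices of such operators.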

University of Copenhagen

Manaswi Paraashar

Quantum computing's potential is evident in algorithms that outperform their classical counterparts in critical computational tasks. However, practical implementation remains elusive due to the absence of large-scale quantum computers. The primary obstacle lies in the noise susceptibility of qubits, necessitating error mitigation strategies. This project focuses on studying the effects of noise on quantum algorithms and on designing error-reducing algorithms. The first objective of the project is computing equivariant Boolean functions over noisy inputs, which is crucial for various computational scenarios. We aim to design efficient and optimal algorithms for one-sided error reduction. In addition to its significant theoretical and practical aspects, such an algorithm would also shed light on computing the class of symmetric equivariant Boolean functions. The second objective of the project delves into the effect of noise on quantum query complexity. We aim to investigate the effects of reversible and irreversible noise on quantum query algorithms based on discrete quantum walks (e.g., Grover's algorithm) and on quantum algorithms for the Hidden Subgroup Problem (e.g., Shor's factoring algorithm). The project also aims to design robust quantum query algorithms resilient to noise, which is crucial for implementing these algorithms on real quantum computers. Overall, this project aims to advance the understanding of noise in quantum systems and to develop algorithms resilient to its effects, facilitating the realization of practical quantum computing applications.

University of Southern Denmark

Gard Olav Helle

This project aims to develop and test quantum algorithms for specific applications in the field of weather and climate prediction. In close collaboration with domain experts from the Danish Meteorological Institute, we aim to determine numerical methods ideally suited for efficient implementation on a quantum computer. An overall objective is to develop methods allowing us to simulate the shallow water equations, an important benchmark test in the field of weather prediction, on quantum hardware. Initial investigations will build up to this by focusing on more basic systems of partial differential equations relevant to weather and climate. Taking advantage of the applicant's and mentors' expertise in advanced mathematics related to gauge theory and quantum mathematics, the project seeks to explore more theoretical methods building on the Madelung transform, a method recasting Navier-Stokes-type equations in the shape of a Schrödinger equation. We intend to extend these methods to more realistic real-world applications and test them on appropriate quantum hardware. Taking advantage of the unique competency at the Centre for Quantum Mathematics at SDU, the project aims to explore quantum computational methods outside the standard quantum circuit model. More precisely, in the setting of topological quantum computing, which is intimately linked to topological quantum field theory, we seek to develop algorithms in the mathematical language of braids native to this theory. As topological quantum computing is inherently resilient to noise, this is a natural setting for achieving the fault tolerance that will eventually be needed for the ambitious applications in this project.
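As a pointer to the mathematics involved (the standard Madelung substitution, stated here only for orientation), writing the wave function as \psi = \sqrt{\rho}\, e^{i\theta} and inserting it into the Schrödinger equation splits it into a continuity equation and an Euler-type momentum equation,

\[ \partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad \partial_t v + (v \cdot \nabla) v = -\,\nabla \big( V + Q[\rho] \big), \qquad v \propto \nabla \theta, \]

where Q[\rho] is the quantum-pressure term. Running this correspondence in reverse is what allows Navier-Stokes-type fluid equations, and ultimately the shallow water equations, to be recast in Schrödinger form and hence approached with quantum algorithms.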

Revised
11 Sep 2025