Publications: Institutionen för systemteknik (Department of Electrical Engineering)
Datorseende
Datorteknik
Elektroniska Kretsar och System
Fordonssystem
Informationskodning
Kommunikationssystem
Reglerteknik
Latest doctoral dissertations
Using sensors to observe real-world systems is important in many applications. A typical use case is target tracking, where sensor measurements are used to compute estimates of targets. Two of the main purposes of the estimates are to enhance situational awareness and facilitate decision-making. Hence, the estimation quality is crucial. By utilizing multiple sensors, the estimation quality can be further improved. Here, the focus is on target tracking in decentralized sensor networks, where multiple agents estimate a common set of targets. In a decentralized context, measurements undergo local preprocessing at the agent level, resulting in local estimates. These estimates are subsequently shared among the agents for estimate fusion. Sharing information leads to correlations between estimates, which in decentralized sensor networks are often unknown. In addition, there are situations where the communication capacity is constrained, such that the shared information needs to be reduced. This thesis addresses two aspects of decentralized target tracking: (i) fusion of estimates with unknown correlations; and (ii) handling of constrained communication resources.
Decentralized sensor networks have unknown correlations because it is typically impossible to keep track of dependencies between estimates. A common approach in this case is to use conservative estimators, which can ensure that the true uncertainty of an estimate is not underestimated. This class of estimators is pursued here. A significant part of the thesis is dedicated to the widely used conservative method known as covariance intersection (CI), while also describing and deriving alternatives to CI. One major result related to aspect (i) is the conservative linear unbiased estimator (CLUE), which is proposed as a general framework for optimal conservative estimation. It is shown that several existing methods, including CI, are optimal CLUEs under different conditions.
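As an illustration of the CI method mentioned above, the following minimal Python sketch fuses two estimates using the standard CI formulas, with the weight chosen to minimize the trace of the fused covariance; the function name and the trace criterion are illustrative assumptions, not necessarily the exact formulation used in the thesis.

import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates with unknown cross-correlation using standard CI."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    # Choose the weight w in [0, 1] that minimizes the trace of the fused covariance.
    trace_of_fused = lambda w: np.trace(np.linalg.inv(w * I1 + (1.0 - w) * I2))
    w = minimize_scalar(trace_of_fused, bounds=(0.0, 1.0), method="bounded").x

    P = np.linalg.inv(w * I1 + (1.0 - w) * I2)   # conservative fused covariance
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)  # fused estimate
    return x, P

# Example: two 2D estimates of the same target state.
x1, P1 = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.5, 1.8]), np.diag([3.0, 1.0])
x_fused, P_fused = covariance_intersection(x1, P1, x2, P2)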
A decentralized sensor network allows for less data to be communicated compared to its centralized counterpart. Yet, there are still situations where the communication load needs to be further reduced. Since both estimates and covariance matrices are shared in this setting, the communication load is mostly driven by the covariance matrices. One way to reduce the communication load is to exchange only parts of the covariance matrix. To this end, several methods that preserve conservativeness are proposed. Significant results related to aspect (ii) include several algorithms for transforming exchanged estimates into a lower-dimensional subspace. Each algorithm corresponds to a certain estimation method, and for some of the algorithms, optimality is guaranteed. Moreover, a framework is developed to enable the use of the proposed dimension-reduction techniques when only local information is available at an agent. Finally, an optimization strategy is proposed to compute dimension-reduced estimates while maintaining data association quality.
@phdthesis{diva2:1811376,
author = {Forsling, Robin},
title = {{The Dark Side of Decentralized Target Tracking:
Unknown Correlations and Communication Constraints}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2359}},
year = {2023},
address = {Sweden},
}
If data-driven black-box models, e.g., neural networks, are to be used as components in safety-critical systems such as autonomous vehicles, knowing how uncertain they are in their predictions is crucial. However, standard formulations of neural networks do not provide this information. Hence, this thesis aims to develop a method that can, out of the box, extend the standard formulations to include uncertainty in the prediction. The proposed method is based on a local linear approximation, using a two-step linearization to quantify the uncertainty in the prediction from the neural network. First, the posterior distribution of the neural network parameters is approximated by a Gaussian distribution. The mean of the distribution is at the maximum a posteriori estimate of the parameters, and the covariance is estimated from the shape of the likelihood function in the vicinity of the estimated parameters. The second linearization is used to propagate the uncertainty in the parameters to uncertainty in the model's output, i.e., to create a linear approximation of the nonlinear model that the neural network constitutes.
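To make the two-step idea concrete, the sketch below propagates a Gaussian parameter covariance through a linearization of the model output (the delta method). It is a simplified illustration, not the thesis's implementation: the toy model, the finite-difference Jacobian and the chosen parameter covariance are assumptions made purely for illustration.

import numpy as np

def prediction_uncertainty(f, theta_map, Sigma_theta, x, eps=1e-6):
    """Propagate parameter uncertainty to the model output via linearization.

    f           : model, f(x, theta) -> output vector
    theta_map   : maximum a posteriori parameter estimate
    Sigma_theta : Gaussian approximation of the parameter posterior covariance
    """
    y = f(x, theta_map)
    # Jacobian of the output w.r.t. the parameters, by central finite differences.
    J = np.zeros((y.size, theta_map.size))
    for i in range(theta_map.size):
        d = np.zeros_like(theta_map)
        d[i] = eps
        J[:, i] = (f(x, theta_map + d) - f(x, theta_map - d)) / (2 * eps)
    # Linearized output covariance: J * Sigma * J^T.
    return y, J @ Sigma_theta @ J.T

# Toy example: a one-parameter-pair "network" y = theta[0] * tanh(theta[1] * x).
f = lambda x, th: np.atleast_1d(th[0] * np.tanh(th[1] * x))
theta_map = np.array([1.2, 0.7])
Sigma_theta = 0.01 * np.eye(2)          # e.g., from an inverse-Hessian estimate
y, Sigma_y = prediction_uncertainty(f, theta_map, Sigma_theta, x=0.5)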
The first part of the thesis considers regression problems, with examples from road-friction experiments using simulated and experimentally collected data. For the model-order selection problem, it is shown that the method does not underestimate the uncertainty in the prediction of overparametrized models.
The second part of the thesis considers classification problems. The concept of calibration of the uncertainty, i.e., how reliable the uncertainty is and how closely it resembles the true uncertainty, is considered. The proposed method is shown to produce calibrated estimates of the uncertainty, evaluated on classical image data sets. From a computational perspective, the thesis proposes a recursive update of the parameter covariance, enhancing the method's viability. Furthermore, it shows how quantified uncertainty can improve the robustness of a decision process by formulating an information fusion scheme that includes both temporal correlation and correlation between classifiers. Moreover, having access to a measure of uncertainty in the prediction is essential when detecting outliers in the data, i.e., examples that the neural network has not seen during training. On this task, the proposed method shows promising results. Finally, the thesis proposes an extension that enables a multimodal representation of the uncertainty.
The third part of the thesis considers the tracking of objects in image sequences, where the objects are detected using standard neural network-based object detection algorithms. The problem is formulated as a filtering problem, with the predicted class and position of the object viewed as the measurements. The filtering formulation improves robustness towards false classifications when the method is evaluated on examples from animal conservation in the Swedish forests.
@phdthesis{diva2:1805410,
author = {Malmström, Magnus},
title = {{Approximative Uncertainty in Neural Network Predictions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2358}},
year = {2023},
address = {Sweden},
}
The protection of confidential data is a fundamental need in the society in which we live. This task becomes even more relevant when observing that data traffic grows exponentially every day, as does the number of attacks on the telecommunication infrastructure. It has been strongly argued from the natural sciences that quantum communication has great potential to solve this problem, to such an extent that various governmental and industrial entities believe the protection provided by quantum communication will be an important layer in the field of information security in the coming decades. However, integrating quantum technologies both in current optical networks and in industrial systems is not a trivial task, considering that a large part of current quantum optical systems is based on bulk optical devices, which could become an important limitation. Throughout this thesis we present an all-in-fiber optical platform that enables a wide range of tasks aimed at taking a step forward in the generation and detection of photonic states. Among the main features, the generation and detection of photonic quantum states carrying orbital angular momentum stand out.
The platform can also be configured for the generation of random numbers from quantum mechanical measurements, a central aspect in future information tasks.
Our scheme is based on the use of new space-division multiplexing (SDM) technologies such as few-mode fibers and photonic lanterns. Furthermore, our platform can be scaled to high dimensions, operates at 1550 nm (the telecommunications band), and all the components used for its implementation are commercially available. The results presented in this thesis offer a solid alternative for guaranteeing the compatibility of new SDM technologies with emerging experiments on optical networks and open up new possibilities for quantum communication.
@phdthesis{diva2:1797425,
author = {Alarcón, Alvaro},
title = {{All-Fiber System for Photonic States Carrying Orbital Angular Momentum:
A Platform for Classical and Quantum Information Processing}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2340}},
year = {2023},
address = {Sweden},
}
In Model Predictive Control (MPC), optimization problems are solved recurrently to produce control actions. When MPC is used in real time to control safety-critical systems, it is important to solve these optimization problems with guarantees on the worst-case execution time. In this thesis, we take aim at such worst-case guarantees through two complementary approaches:
(i) By developing methods that determine exact worst-case bounds on the computational complexity and execution time for deployed optimization solvers.
(ii) By developing efficient optimization solvers that are tailored for the given application and hardware at hand.
We focus on linear MPC, which means that the optimization problems in question are quadratic programs (QPs) that depend on parameters such as system states and reference signals. For solving such QPs, we consider active-set methods: a popular class of optimization algorithms used in real-time applications.
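For concreteness, the QPs in question can be written as multi-parametric QPs in a parameter vector θ that collects the current state and reference; the notation below is a standard form assumed for illustration rather than taken from the thesis:

\[
  \min_{z}\; \tfrac{1}{2} z^{\top} H z + \theta^{\top} F^{\top} z
  \quad \text{subject to} \quad G z \le b + S\theta,
\]

where z stacks the control inputs over the prediction horizon, so certifying a solver amounts to bounding its behavior over all admissible θ.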
The first part of the thesis concerns complexity certification of well-established active-set methods. First, we propose a certification framework that determines the sequence of subproblems that a class of active-set algorithms needs to solve, for every possible QP instance that might arise from a given linear MPC problem (i.e., for every possible state and reference signal). By knowing these sequences, one can exactly bound the number of iterations and/or floating-point operations that are required to compute a solution. In a second contribution, we use this framework to determine the exact worst-case execution time (WCET) for linear MPC. This requires factors such as hardware and software implementation/compilation to be accounted for in the analysis. The framework is further extended in a third contribution by accounting for internal numerical errors in the solver that is certified. In a similar vein, a fourth contribution extends the framework to handle proximal-point iterations, which can be used to improve the numerical stability of QP solvers, furthering their reliability.
The second part of the thesis concerns efficient solvers for real-time MPC. We propose an efficient active-set solver that is contained in the above-mentioned complexity-certification framework. In addition to being real-time certifiable, we show that the solver is efficient, simple to implement, can easily be warm-started, and is numerically stable, all of which are important properties for a solver that is used in real-time MPC applications. As a final contribution, we use this solver to exemplify how the proposed complexity-certification framework developed in the first part can be used to tailor active-set solvers for a given linear MPC application. Specifically, we do this by constructing and certifying parameter-varying initializations of the solver.
@phdthesis{diva2:1755033,
author = {Arnström, Daniel},
title = {{Real-Time Certified MPC:
Reliable Active-Set QP Solvers}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2324}},
year = {2023},
address = {Sweden},
}
The trend of automation in industry, and in society in general, is something that probably all of us have noticed. The mining industry is no exception to this trend, and there exists a vision of completely automated mines with all processes monitored and controlled through a higher-level optimization goal. For this vision, access to a reliable positioning system has been identified as a prerequisite. Underground mines pose extraordinary challenges for localization, due to the harsh, unstructured and ever-changing environment, where existing localization solutions struggle with accuracy and reliability over time.
This thesis addresses the problem of achieving accurate, robust and consistent position estimates for long-term autonomy of vehicles operating in an underground mining environment. The focus is on onboard positioning solutions utilizing sensor fusion within the probabilistic filtering framework, with extra emphasis on the characteristics of lidar data. Contributions are made in the areas of improved state estimation algorithms, more efficient lidar data processing and the development of models for changing environments. The problem descriptions and ideas in this thesis spring from underground localization issues, but many of the resulting solutions and methods are valid beyond this application.
In this thesis, internal localization algorithms and data processing techniques are analyzed in detail. The effects of tuning the parameters in an unscented Kalman filter are examined, and guidelines for choosing suitable values are suggested. Proper parameter values are shown to substantially improve the position estimates for the underground application. Robust and efficient processing of lidar data is explored both through analysis of the information contribution of individual laser rays, and through preprocessing in terms of feature extraction. Methods suitable for available hardware are suggested, and it is shown how consistency in the state estimates can be maintained with fewer computations.
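As an illustration of what such tuning concerns, the following minimal Python sketch generates the sigma points of the standard scaled unscented transform, where alpha, beta and kappa are the tuning parameters in question; the values shown are common textbook choices, not the guidelines derived in the thesis.

import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented-transform sigma points and weights.

    alpha, beta, kappa are the tuning parameters; the defaults here are
    frequently cited textbook values, not the thesis's recommendations.
    """
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root

    pts = np.vstack([x, x + S.T, x - S.T])       # 2n + 1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, Wm, Wc

# Example: sigma points for a 3D state with unit covariance.
pts, Wm, Wc = sigma_points(np.zeros(3), np.eye(3), alpha=0.5, kappa=0.0)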
Changes in the environment can be devastating for a localization system when the characteristics of the observations no longer match the provided map. One way to manage this is to extend the localization problem to simultaneous localization and mapping (SLAM). In its standard formulation, SLAM assumes a truly static surrounding. In this thesis a feature-based multi-hypothesis map representation is developed that allows encoding of changes in the environment. The representation is verified to perform well for localization in scenarios where landmarks can attain one of many possible positions. Automatic creation of such maps is suggested with methods completely integrated with the SLAM framework. This results in a multi-hypothesis SLAM concept that can discover and adapt to changes in the operation area while at the same time producing consistent state estimates.
This thesis provides general insights into lidar data processing and state estimation in changing environments. For the underground mine application specifically, the different methods presented in this thesis target different aspects of the higher goal of achieving robust and accurate position estimates. Together they present a collective view of how to design localization systems that produce reliable estimates for underground mining environments.
@phdthesis{diva2:1752033,
author = {Nielsen, Kristin},
title = {{Localization for Autonomous Vehicles in Underground Mines}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2318}},
year = {2023},
address = {Sweden},
}
In this thesis, we focus on vulnerabilities and robustness of two wireless communication technologies: the global navigation satellite system (GNSS), a technology that provides position-velocity-time information, and massive multiple-input multiple-output (MIMO), a core cellular 5G technology. In particular, we investigate spoofing attacks on GNSS and jamming attacks on massive MIMO, as well as massive MIMO receivers that are robust against impulsive noise. In this context, spoofing refers to the situation in which a receiver identifies falsified signals, transmitted by spoofers, as legitimate or trustworthy signals.
Jamming, on the other hand, refers to the transmission of radio signals that disrupt communications by decreasing the signal-to-interference-plus-noise ratio (SINR) on the receiver side.
The reason we investigate impulsive noise is that standard wireless receivers assume that the noise has a Gaussian distribution. However, impulsive noise may appear in any communication link. The difference between impulsive noise and standard Gaussian noise is that outliers are much more likely under impulsive noise. Therefore, we question whether the standard Gaussian receivers are robust against impulsive noise, and we design receivers that are robust against it.
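The following small Python experiment, included here only as an illustration and not taken from the papers, shows the heavy-tailed behavior that distinguishes an impulsive noise model, here a Cauchy distribution, from Gaussian noise:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gaussian = rng.standard_normal(n)
cauchy = rng.standard_cauchy(n)          # a heavy-tailed, impulsive noise model

# Fraction of samples with magnitude larger than 5.
print((np.abs(gaussian) > 5).mean())     # essentially zero for Gaussian noise
print((np.abs(cauchy) > 5).mean())       # roughly 13 percent for Cauchy noise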
More specifically, in paper A we analyze the effects of distributed jammers on massive MIMO and answer the following questions: Is massive MIMO more robust to distributed jammers than previous generations of cellular networks? Which jamming attack strategies are best from the jammer's perspective, and can the jamming power be spread over space to achieve more harmful attacks?
In paper B, we propose a detector for GNSS receivers that is able to detect multiple spoofers without having any prior information about the attack strategy or the number of spoofers in the environment.
In papers C and D, we design receivers for massive MIMO that are robust against impulsive noise. In paper C, we model the noise as having a Cauchy distribution and present a channel estimation technique, achievable rates and soft-decision metrics for coded signals. The main observation in paper C is that the proposed receiver works well in the presence of both Cauchy and Gaussian noise, whereas the standard Gaussian receiver performs very poorly when the noise has a Cauchy distribution. In paper D, we compare two types of receivers, the Gaussian-mixture and the Cauchy-based receiver, when the noise has a symmetric alpha-stable (SαS) distribution. Based on the numerical results, the Gaussian-mixture receiver outperforms the Cauchy-based receiver.
@phdthesis{diva2:1747809,
author = {Gülgün, Ziya},
title = {{GNSS and Massive MIMO:
Spoofing, Jamming and Robust Receiver Design for Impulsive Noise}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2310}},
year = {2023},
address = {Sweden},
}
A mobile robot, instructed by a human operator, acts in an environment with many other objects. For an autonomous robot, however, human instructions should be minimal and restricted to high-level directives, such as the ultimate task or destination. In order to increase the level of autonomy, it has become a foremost objective to mimic human vision using neural networks that take a stream of images as input and learn a specific computer vision task from large amounts of data. In this thesis, we explore several different models for surround sensing, each of which contributes to making a higher-level understanding of the environment possible.
As its first contribution, this thesis presents an object tracking method for video sequences, which is a crucial component in a perception system. The method predicts a fine-grained mask to separate the pixels corresponding to the target from those corresponding to the background. Rather than tracking location and size, the method tracks the pixels initially assigned to the target in this so-called video object segmentation. For subsequent time steps, the goal is to learn how the target looks using features from a neural network. We named our method A-GAME, based on its generative modeling of a deep feature space that separates target and background appearances.
In the second contribution of this thesis, we detect, track, and segment all objects from a set of predefined object classes. This information is what allows the robot to increase its capability to perceive its surroundings. We experiment with a graph neural network to weigh all new detections against existing tracks. This model outperforms prior work by separating visually and semantically similar objects frame by frame.
The third contribution investigates one limitation of anchor-based detectors, which classify pre-defined bounding boxes as either negative or positive and thus provide a limited set of handled object shapes. One idea is to learn an alternative instance representation. We experiment with a neural network that predicts the distance to the nearest object contour in different directions from each pixel. The network then computes an approximated signed distance function containing the respective instance information.
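To illustrate what a signed distance function of an instance mask looks like, i.e., the kind of quantity approximated from per-pixel contour distances above, the following sketch computes it with a standard distance transform; this implementation is an assumption for illustration and not the network-based approach of the thesis.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance to the instance contour: positive inside, negative outside."""
    inside = distance_transform_edt(mask)     # distance to background, inside the object
    outside = distance_transform_edt(~mask)   # distance to the object, outside of it
    return inside - outside

# Toy instance mask: a filled rectangle in a 64x64 image.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 24:44] = True
sdf = signed_distance(mask)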
Last, this thesis studies a concept within model validation. We observed that overfitting can increase performance on benchmarks. However, this opportunity is of little value for sensing systems in practice, since measurements, such as lengths or angles, are quantities that explain the environment. The fourth contribution of this thesis is an extended validation technique for camera calibration. This technique uses a statistical model for each error difference between an observed value and the corresponding prediction of the projective model. We compute a test over the differences and detect whether the projective model is incorrect.
@phdthesis{diva2:1745714,
author = {Brissman, Emil},
title = {{Learning to Analyze Visual Data Streams for Environment Perception}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2283}},
year = {2023},
address = {Sweden},
}
As technology continues to advance, interest in relieving humans of tedious or dangerous tasks through automation increases. Some of the tasks that have received increasing attention are autonomous driving, disaster relief, and forestry inspection. Developing and deploying an autonomous robotic system in this type of unconstrained environment, in a safe way, is highly challenging. The system requires precise control and high-level decision making, both of which require a robust and reliable perception system to understand the surroundings correctly.
The main purpose of perception is to extract meaningful information from the environment, be it in the form of 3D maps, dense classification of object types and surfaces, or high-level information about the position and direction of moving objects. Depending on the limitations and application of the system, various types of sensors can be used: lidars, to collect sparse depth information; cameras, to collect dense information for different parts of the visual spectrum, often the red-green-blue (RGB) bands; inertial measurement units (IMUs), to estimate the ego motion; microphones, to interact and respond to humans; GPS receivers, to get global position information; just to mention a few.
This thesis investigates some of the necessities for approaching the requirements of this type of system. Specifically, it focuses on data-driven approaches, that is, machine learning, which has been shown time and again in recent years to be the main contender for high-performance perception tasks. Although precision requirements might be high in industrial production plants, the environment there is relatively controlled and the task fixed. Instead, this thesis studies some of the aspects necessary for complex, unconstrained environments, primarily outdoors and potentially near humans or other systems. The term in the wild refers exactly to the unconstrained nature of these environments, where the system can easily encounter something previously unseen and might interact with unknowing humans. Some examples of such environments are city traffic, disaster relief scenarios, and dense forests.
This thesis will mainly focus on the following three key aspects necessary to handle the types of tasks and situations that could occur in the wild: 1) generalizing to a new environment, 2) adapting to new tasks and requirements, and 3) modeling uncertainty in the perception system.
First, a robotic system should be able to generalize to new environments and still function reliably. Papers B and G address this by using an intermediate representation to allow the system to handle much more diverse types of environment than otherwise possible. Paper B also investigates how robust the proposed autonomous driving system was to incorrect predictions, which is one of the likely results of changing the environment.
Second, a robot should be sufficiently adaptive to allow it to learn new tasks without forgetting the previous ones. Paper E proposes a way to incrementally add new semantic classes to a trained model without access to the previous training data. The approach is based on utilizing the uncertainty in the predictions to model the unknown classes, marked as background.
Finally, the perception system will always be partially flawed, either because of a lack of modeling capability or because of ambiguities in the sensor data. To properly take this into account, it is fundamental that the system can estimate the certainty of its predictions. Paper F proposes a method for predicting the uncertainty in the model predictions when interpolating sparse data. Paper G addresses the ambiguities that exist when estimating the 3D pose of a human from a single camera image.
@phdthesis{diva2:1740415,
author = {Holmquist, Karl},
title = {{Data-Driven Robot Perception in the Wild}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2293}},
year = {2023},
address = {Sweden},
}
One of the aims of Systems Biology is to propose mathematical models that best capture the dynamic behavior of intracellular processes. In this regard, the last two decades have brought a shift in the field, with technological advances now allowing researchers to access a wide range of high-throughput technologies at an affordable cost. These techniques make it possible to simultaneously interrogate thousands of variables, as in genome-wide transcriptomics and proteomics. However, in parallel with these technological advances, there is a growing need for mathematical models that are suited to integrating measurements obtained from different cellular processes.
In this thesis we aim to model combinations of three commonly used types of high-throughput data: epigenetic (namely ATAC-seq and DNA methylation), transcriptomic (RNA-seq) and proteomic (mass spectrometry) data. In the first work we analyze paired ATAC-seq and RNA-seq data to integrate measurements of (i) chromatin openness, (ii) transcription factor (TF) availability and (iii) gene expression. To model these data, we use elementary causal motifs, a class of mathematical models suited to representing causal interactions between three nodes. Indeed, our analysis shows that the elementary causal motifs in the data are enriched for biologically relevant TF-gene interactions. Moreover, a significant overlap is observed between the causal motifs identified in datasets representing similar cell stimuli, suggesting that causal motifs represent a robust biological signal.
This work is then extended to include another class of high-throughput data: mass spectrometry. More precisely, we propose a framework to model the flow of events that goes from chromatin remodeling to splice-variant expression, and from splice variants to protein synthesis. As the underlying graph becomes more complex than in the previous case, a more general mathematical framework is considered: Bayesian networks. Interestingly, this work shows that most putative associations between chromatin regions, splice variants and proteins that have been gathered by the scientific community so far are supported by the data. Moreover, similarly to the previous work, the causal interactions identified in the data highlight relevant biological features; more precisely, causal chains between chromatin regions, splice variants and proteins are enriched for splice variants that have a major role in protein synthesis.
From a technical point of view, causal motifs are characterized by a property known as conditional independence, which can be used to identify causal interactions in the data. However, particularly when the data available is limited, it is challenging to assess conditional independencies in the data. It is therefore of interest to investigate the existence of properties that allow us to predict conditional independence. In particular, in our work we propose two properties: structural balance and inverse balance, which are closely connected to what is known in the literature as positive association and multivariate total positivity of order 2 (MTP2), respectively. Our analysis shows that both heuristics are useful in predicting conditional independence, both from a theoretical perspective and in experimental data.
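As a small illustration of assessing conditional independence in data, the sketch below uses partial correlation under a Gaussian assumption; this is a generic textbook approach and not the structural-balance or inverse-balance heuristics proposed in the thesis.

import numpy as np

def partial_correlation(data, i, j, k):
    """Partial correlation of variables i and j given variable k.

    Under a joint Gaussian assumption, a partial correlation near zero
    indicates conditional independence of i and j given k.
    """
    r = np.corrcoef(data, rowvar=False)
    num = r[i, j] - r[i, k] * r[j, k]
    den = np.sqrt((1 - r[i, k] ** 2) * (1 - r[j, k] ** 2))
    return num / den

# Toy chain X -> Y -> Z: X and Z become (nearly) independent given Y.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = x + 0.5 * rng.standard_normal(2000)
z = y + 0.5 * rng.standard_normal(2000)
data = np.column_stack([x, y, z])
print(partial_correlation(data, 0, 2, 1))   # close to zero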
Lastly, a network-based approach is used to integrate DNA methylation and RNA-seq data in a case-control study centered on multiple sclerosis, in order to identify common regulatory patterns in DNA methylation and gene expression during the course of pregnancy. The strategy is based on the rationale that proteins that are interconnected in the protein-protein network are more likely to be involved in similar cellular functions. Indeed, the analysis highlights that similar pathways are altered at the epigenetic and transcriptomic levels, leading to a set of genes that are likely involved in the modification of disease symptoms observed during pregnancy.
@phdthesis{diva2:1729981,
author = {Zenere, Alberto},
title = {{Integration of epigenetic, transcriptomic and proteomic data}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2294}},
year = {2023},
address = {Sweden},
}
Transport is an integral part of society and one of its basic prerequisites. Society is now facing a transition as it must go from dependence on fossil fuels to sustainability. Despite large investments by the vehicle manufacturers, the transition needs to be accelerated for the two-degree (Celsius) target to be reached, which requires new innovations and solutions.
The development of computers has led to efficient software being available today to numerically solve optimization problems, which enables mathematical modeling and optimization as a systematic problem-solving method. However, taking advantage of the numerical solvers requires specialized knowledge and is a barrier for many engineers. To overcome this and make the problem-solving methodology available, tools that bridge the gap between the engineer’s problem and the numerical solvers are needed.
The dissertation covers the complete chain from problem to solution, with methods and tools that support the problem-solving process. Software for optimal control is investigated with the aim of making the numerical solvers available to the user. The result is a design based on the introduction of a domain-specific programming language. It makes it possible to automatically reformulate the user’s problem into a form that the computer can handle, while making the program more user-friendly by reducing the difference between the problem domain and the computer’s domain. The result has been developed together with the software Yop, which is used by engineers and researchers nationally and internationally to solve control engineering problems, in academia as well as in industry.
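As a generic illustration of the kind of reformulation such a tool performs, the sketch below transcribes a small optimal control problem into a nonlinear program that an off-the-shelf numerical solver can handle. It is a hand-written direct-transcription example in Python, assumed purely for illustration; it does not show Yop's syntax or implementation.

import numpy as np
from scipy.optimize import minimize

# Minimal direct-transcription sketch: minimize the control effort
#   J = h * sum(u_k^2)  subject to  x_{k+1} = x_k + h*u_k,  x_0 = 0,  x_N = 1.
N, h = 20, 0.05
n_var = 2 * N + 1                      # decision vector: [x_0..x_N, u_0..u_{N-1}]

def cost(w):
    u = w[N + 1:]
    return h * np.sum(u ** 2)

def dynamics_defects(w):
    x, u = w[:N + 1], w[N + 1:]
    return x[1:] - x[:-1] - h * u      # should be zero at a feasible solution

constraints = [
    {"type": "eq", "fun": dynamics_defects},
    {"type": "eq", "fun": lambda w: w[0]},          # initial state x_0 = 0
    {"type": "eq", "fun": lambda w: w[N] - 1.0},    # final state x_N = 1
]

w0 = np.zeros(n_var)
sol = minimize(cost, w0, constraints=constraints, method="SLSQP")
x_opt, u_opt = sol.x[:N + 1], sol.x[N + 1:]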
The software is used to investigate whether an electrified powertrain can be made more efficient by equipping the diesel engine with a larger and more efficient turbocharger, at the expense of increased inertia. The results indicate a gain and that the increased inertia can be compensated for by the electric motor. As part of the work, a diesel engine model has been developed, in which it has been investigated how effects relevant to turbocharger selection can be included in a way suitable for optimal control. The result is a validated, dynamic diesel engine model that has been made available to the research community through publications and open-source code.
@phdthesis{diva2:1709608,
author = {Leek, Viktor},
title = {{Optimal Control for Energy Efficient Vehicle Propulsion:
Methodology, Application, and Tools}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2270}},
year = {2022},
address = {Sweden},
}