## Publications, Department of Electrical Engineering (Institutionen för systemteknik)

Datorseende (Computer Vision)

Datorteknik (Computer Engineering)

Elektroniska Kretsar och System (Electronic Circuits and Systems)

Fordonssystem (Vehicular Systems)

Informationskodning (Information Coding)

Kommunikationssystem (Communication Systems)

Reglerteknik (Automatic Control)

## Latest doctoral theses

Electronic devices with wireless connectivity are fast becoming a part of daily life. According to some estimates, 10 billion new devices with internet connectivity will be produced in the next five years. To lower the costs and extend the battery life of electronic circuits, there is an increased interest in using low-cost, low-power CMOS circuits. By taking advantage of the higher integration capabilities of modern CMOS, the analog, digital, and radio circuits can be integrated on a single die, typically called a radio-frequency system-on-chip (RF-SoC).

In an RF-SoC, most of the power is usually consumed by the radio circuits, especially the power amplifier (PA). Hence, to take advantage of the improved switching capability of transistors in modern CMOS, the use of switch-mode PAs (SMPAs) is becoming more popular. SMPAs exhibit a much higher efficiency as compared to their linear counterparts and can be easily integrated with the digital baseband circuits.

To satisfy the demand for higher data throughput, modern wireless standards like LTE and IEEE 802.11 generate envelope-varying signals using advanced modulation schemes like M-QAM and OFDM. Among several other techniques, pulse-width modulation (PWM) allows for the amplification of envelope-varying signals using SMPAs.
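
As a minimal illustration of the idea (a naturally-sampled PWM sketch with made-up rates and signals, not one of the transmitters proposed in the thesis): comparing the envelope against a high-frequency triangular carrier yields a two-level signal whose duty cycle tracks the envelope, and such a signal can drive a switch-mode PA.

```python
import numpy as np

fs = 1_000_000                       # simulation sample rate, Hz (assumed)
t = np.arange(0, 1e-3, 1 / fs)       # 1 ms of signal

# Slowly varying envelope in [0, 1] (a 1 kHz tone, chosen for illustration)
envelope = 0.5 + 0.4 * np.sin(2 * np.pi * 1e3 * t)

# Triangular reference carrier in [0, 1] at 50 kHz
f_c = 50e3
carrier = np.abs(2 * ((t * f_c) % 1) - 1)

# Comparator output: a two-level signal suitable for a switch-mode PA
pwm = (envelope > carrier).astype(float)

# Averaged over a carrier period, the PWM signal approximates the envelope
print(pwm.mean())   # close to the envelope mean of 0.5
```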

The first part of this thesis explores techniques to improve the spectral performance of PWM-based transmitters. The proposed transmitters are fully digital, and the entire signal chain up to the PA can be implemented using the digital design flow, which is especially beneficial in sub-micron CMOS processes with low voltage headroom. A new transmitter is proposed that compensates for the aliasing distortion in polar PWM transmitters by using outphasing. The transmitter exhibits an improvement of up to 9 dB in dynamic range for a 1.4 MHz LTE uplink signal. The idea is extended to compensate for both image and aliasing distortions in all-digital implementations of polar PWM transmitters. By using a field programmable gate array (FPGA) and Class-D SMPAs, the proposed transmitter shows an improvement of up to 6.9 dBc in the adjacent channel leakage ratio (ACLR) and 10% in the error vector magnitude (EVM) for a 20 MHz LTE uplink signal. The proposed transmitter is fully programmable and can be easily adapted for multi-band and multi-standard transmission.

To enhance the phase linearity of all-digital PWM transmitters, a new transmitter architecture based on outphasing is presented. The proposed transmitter uses outphasing to improve the phase resolution and exhibits an improvement of 2.8 dBc and 3.3% in ACLR and EVM, respectively.

The difference between the polar and quadrature implementations of RF-PWM-based transmitters is explored. By using mathematical derivations and simulations, it is shown that the polar implementation outperforms the quadrature implementation due to its lower quantization noise. An RF-PWM-based transmitter that eliminates both image and aliasing distortions is presented. The proposed transmitter has an all-digital implementation, uses a single SMPA, and eliminates the need for a power combiner, resulting in a more compact design. For a 1.4 MHz LTE uplink signal, the proposed transmitter exhibits an improvement of up to 11.3 dBc in ACLR.

The second part of this work focuses on the design of all-digital area-efficient architectures of time-to-digital converters (TDCs). A TDC is essentially a stopwatch with picosecond resolution and can be used to accurately quantify the pulse width and position of PWM signals.

A Vernier delay line-based TDC is presented that replaces the conventionally used sampling D flip-flops with a single transistor. The resulting implementation does not suffer from the blackout time associated with D flip-flops, allowing for a more compact design. The proposed TDC achieves a time resolution of 5.7 ps and consumes 1.85 mW of power while operating at 50 MS/s.
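
The Vernier principle behind such a TDC can be sketched in a few lines (integer picosecond stage delays, made up for illustration; the actual 5.7 ps design is a circuit, not software): the stop edge travels through a slightly faster delay line and gains one resolution step on the start edge per stage.

```python
TAU_START_PS = 10    # per-stage delay of the start (slow) line, ps (assumed)
TAU_STOP_PS = 9      # per-stage delay of the stop (fast) line, ps (assumed)
RESOLUTION_PS = TAU_START_PS - TAU_STOP_PS   # effective resolution: 1 ps

def vernier_tdc(interval_ps, n_stages=256):
    """Digital code for a start-to-stop interval: the stage index at
    which the faster stop edge catches up with the start edge."""
    start_t, stop_t = 0, interval_ps
    for stage in range(1, n_stages + 1):
        start_t += TAU_START_PS
        stop_t += TAU_STOP_PS
        if stop_t <= start_t:
            return stage
    return n_stages   # interval outside the measurement range

print(vernier_tdc(5))    # a 5 ps interval is resolved after 5 stages -> code 5
```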

A modified switching scheme is presented that reduces the power consumed by the thermometer-to-binary encoder used in TDCs. By taking advantage of the operating nature of TDCs, the proposed switching scheme reduces the power consumption by up to 40% for a 256-bit encoder.
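
Behaviourally, a thermometer-to-binary encoder simply counts the ones in the thermometer code (a sketch of the function only; the power savings in the thesis come from how the switching is organized in hardware, which this does not model):

```python
def thermometer_to_binary(code):
    """Binary value of a thermometer code given as a list of 0/1 bits,
    with the ones in a contiguous run (e.g. [1, 1, 1, 0, 0])."""
    return sum(code)

print(thermometer_to_binary([1, 1, 1, 0, 0, 0, 0, 0]))   # -> 3
```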

```
@phdthesis{diva2:1275760,
author = {Touqir Pasha, Muhammad},
title = {{All-Digital PWM Transmitters}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1972}},
year = {2019},
address = {Sweden},
}
```

Models are commonly used to simulate events and processes, and can be constructed from measured data using system identification. The common way is to model the system from input to output, but in this thesis we want to obtain the inverse of the system.

Power amplifiers (PAs) used in communication devices can be nonlinear, and this causes interference in adjacent transmitting channels. A prefilter, called a predistorter, can be used to invert the effects of the PA, such that the combination of predistorter and PA reconstructs an amplified version of the input signal. In this thesis, the predistortion problem has been investigated for outphasing power amplifiers, where the input signal is decomposed into two branches that are amplified separately by highly efficient nonlinear amplifiers and then recombined. We have formulated a model structure describing the imperfections in an outphasing PA and the matching ideal predistorter. The predistorter can be estimated from measured data in different ways. Here, the initially nonconvex optimization problem has been developed into a convex problem. The predistorters have been evaluated in measurements.

The goal of the inverse models in this thesis is to use them in cascade with the systems to reconstruct the original input. It is shown that the problems of identifying a model of a preinverse and a postinverse are fundamentally different. It turns out that the true inverse is not necessarily the best one when noise is present, and that other models and structures can lead to better inversion results.

To construct a predistorter (for a PA, for example), a model of the inverse is used, and different methods can be used for the estimation. One common method is to estimate a postinverse and then use it as a preinverse, making it straightforward to try out different model structures. Another is to construct a model of the system and then use it to estimate a preinverse in a second step. This method identifies the inverse in the setup in which it will be used, but leads to a complicated optimization problem. A third option is to model the forward system and then invert it. In contrast to the methods above, this one can be understood using standard identification theory, but the model is tuned for the forward system, not the inverse. Models obtained using the various methods capture different properties of the system, and a more detailed analysis of the methods is presented for linear time-invariant systems and linear approximations of block-oriented systems. The theory is also illustrated in examples.
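
The first of these methods can be sketched for a toy memoryless PA (the cubic nonlinearity and the polynomial inverse model are assumptions for illustration, not the outphasing structure studied in the thesis): fit a postinverse by least squares, then reuse it as a preinverse.

```python
import numpy as np

rng = np.random.default_rng(0)
pa = lambda x: x - 0.1 * x**3          # toy memoryless PA nonlinearity (assumption)

# "Measured" input/output data
x = rng.uniform(-1, 1, 2000)
y = pa(x)

# Postinverse: fit g so that g(y) ~ x. The polynomial model is linear
# in its coefficients, so this is an ordinary least-squares problem.
predistorter = np.poly1d(np.polyfit(y, x, deg=5))

# Reuse the postinverse as a preinverse: pa(g(u)) should approximate u.
u = np.linspace(-0.8, 0.8, 100)
err = np.max(np.abs(pa(predistorter(u)) - u))
print(f"max linearization error: {err:.4f}")
```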

When a preinverse is used, the input to the system will be changed, and typically the input data will be different than the original input. This is why the estimation of preinverses is more complicated than for postinverses, and one set of experimental data is not enough. Here, we have shown that identifying a preinverse in series with the system in repeated experiments can improve the inversion performance.

```
@phdthesis{diva2:1268936,
author = {Jung, Ylva},
title = {{Inverse system identification with applications in predistortion}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1966}},
year = {2018},
address = {Sweden},
}
```

Automatic decision making and pattern recognition under uncertainty are difficult tasks that are ubiquitous in our everyday life. The systems we design, and the technology we develop, require us to coherently represent and work with uncertainty in data. Probabilistic models and probabilistic inference give us a powerful framework for solving this problem. Using this framework, while enticing, results in difficult-to-compute integrals and probabilities when conditioning on the observed data. This means we need approximate inference: methods that solve the problem approximately using a systematic approach. In this thesis we develop new methods for efficient approximate inference in probabilistic models.

There are generally two approaches to approximate inference: variational methods and Monte Carlo methods. In Monte Carlo methods we use a large number of random samples to approximate the integral of interest. With variational methods, on the other hand, we turn the integration problem into an optimization problem. We develop algorithms of both types and bridge the gap between them.
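
The Monte Carlo idea in its simplest form (a generic example, not one of the thesis methods): replace an expectation by a sample average, here for a case with a known closed form so the answer can be checked.

```python
import numpy as np

rng = np.random.default_rng(1)

# E[cos(X)] for X ~ N(0, 1) equals exp(-1/2) ~ 0.6065, which gives a
# ground truth to compare the sample average against.
n = 200_000
samples = rng.standard_normal(n)
estimate = np.cos(samples).mean()

print(estimate, np.exp(-0.5))
```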

First, we present a self-contained tutorial on the popular sequential Monte Carlo (SMC) class of methods. Next, we propose new algorithms and applications based on SMC for approximate inference in probabilistic graphical models. We derive nested sequential Monte Carlo, a new algorithm particularly well suited for inference in a large class of high-dimensional probabilistic models. Then, inspired by similar ideas, we derive interacting particle Markov chain Monte Carlo to make use of parallelization to speed up approximate inference for universal probabilistic programming languages. After that, we show how the rejection sampling process used when generating gamma-distributed random variables can be exploited to speed up variational inference. Finally, we bridge the gap between SMC and variational methods by developing variational sequential Monte Carlo, a new flexible family of variational approximations.
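
A minimal bootstrap SMC (particle filter) conveys the common core of these algorithms (the linear-Gaussian model and all constants below are assumptions for illustration, not one of the proposed algorithms): propagate particles through the dynamics, weight them by the observation likelihood, and resample.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 500, 50

# Toy linear-Gaussian state-space model (assumed):
#   x_t = 0.9 x_{t-1} + v_t,   y_t = x_t + 0.5 e_t,   v_t, e_t ~ N(0, 1)
xs_true, ys = [], []
x = 0.0
for _ in range(T):
    x = 0.9 * x + rng.standard_normal()
    xs_true.append(x)
    ys.append(x + 0.5 * rng.standard_normal())

particles = rng.standard_normal(N)
estimates = []
for y in ys:
    particles = 0.9 * particles + rng.standard_normal(N)   # propagate
    logw = -0.5 * ((y - particles) / 0.5) ** 2             # Gaussian log-likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()            # normalized weights
    estimates.append(w @ particles)                        # filtering mean
    particles = particles[rng.choice(N, N, p=w)]           # multinomial resampling

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(xs_true)) ** 2))
print(f"filtering RMSE: {rmse:.3f}")
```

The RMSE should be close to the observation noise level, far below the prior standard deviation of the state.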

```
@phdthesis{diva2:1262062,
author = {Andersson Naesseth, Christian},
title = {{Machine learning using approximate inference:
Variational and sequential Monte Carlo methods}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1969}},
year = {2018},
address = {Sweden},
}
```

Target tracking is a mature topic with over half a century of mainly military and aviation research. The field has lately expanded into a range of civilian applications due to the development of cheap sensors and improved computational power. With the rise of new applications, new challenges emerge, and with better hardware there is an opportunity to employ more elaborate algorithms.

There are five main contributions to the field of target tracking in this thesis. Contributions I-IV concern the development of non-conventional models for target tracking and the resulting estimation methods. Contribution V concerns a reformulation for improved performance. To show the functionality and applicability of the contributions, all proposed methods are applied to and verified on experimental data related to tracking of animals or other objects in nature.

In Contribution I, sparse Gaussian processes are proposed to model behaviours of targets that are caused by influences from the environment, such as wind or obstacles. The influences are learned online as a part of the state estimation using an extended Kalman filter. The method is also adapted to handle time-varying influences and to identify dynamic systems. It is shown to improve accuracy over the nearly constant velocity and acceleration models in simulation. The method is also evaluated in a sea ice tracking application using data from a radar on Svalbard.

In Contribution II, a state-space model is derived that incorporates observations with uncertain timestamps. An example of such observations could be traces left by a target. Estimation accuracy is shown to be better than the alternative of disregarding the observation. The position of an orienteering sprinter is improved using the control points as additional observations.

In Contribution III, targets that are confined to a certain space, such as animals in captivity, are modelled to avoid collision with the boundaries by turning. Unlike conventional models, which may suffer from infeasible predictions, the proposed model forces the predictions to remain inside the confined space. In particular, the model improves robustness against occlusions. The model is successfully used to track dolphins in a dolphinarium as they swim in a basin with occluded sections.

In Contribution IV, an extension to the jump Markov model is proposed that incorporates observations of the mode that are state-independent. Normally, the mode is estimated by comparing actual and predicted observations of the state. However, sensor signals may provide additional information directly dependent on the mode. Such information from a video recorded by biologists is used to estimate take-off times and directions of birds captured in circular cages. The method is shown to compare well with a more time-consuming manual method.

In Contribution V, a reformulation of the labelled multi-Bernoulli filter is used to exploit a structure of the algorithm to attain a more efficient implementation. Modern target tracking algorithms are often very demanding, so sound approximations and clever implementations are needed to obtain reasonable computational performance. The filter is integrated in a full framework for tracking sea ice, from pre-processing to presentation of results.

```
@phdthesis{diva2:1259864,
author = {Veibäck, Clas},
title = {{Tracking the Wanders of Nature}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1958}},
year = {2018},
address = {Sweden},
}
```

Massive MIMO (Multiple-Input Multiple-Output) is a cellular-network technology in which the base station is equipped with a large number of antennas and aims to serve several different users simultaneously, on the same frequency resource through spatial multiplexing. This is made possible by employing efficient beamforming, based on channel estimates acquired from uplink reference signals, where the base station can transmit the signals in such a way that they add up constructively at the users and destructively elsewhere. The multiplexing together with the array gain from the beamforming can increase the spectral efficiency over contemporary systems.
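
The coherent addition can be illustrated with maximum-ratio transmission in a few lines (an i.i.d. Rayleigh channel and 100 antennas, assumed purely for illustration): conjugate beamforming aligns the per-antenna phases so the received power grows with the number of antennas, while a transmitter without channel knowledge gets no such gain.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 100                                   # base-station antennas (assumed)
# i.i.d. Rayleigh channel, unit average gain per antenna
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

w = h.conj() / np.linalg.norm(h)          # unit-norm MRT precoder
array_gain = np.abs(h @ w) ** 2           # = ||h||^2, ~ M on average

# Same transmit power, but without channel knowledge: no array gain
w_blind = np.ones(M) / np.sqrt(M)
blind_gain = np.abs(h @ w_blind) ** 2

print(array_gain, blind_gain)
```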

One challenge of practical importance is how to transmit data in the downlink when no channel state information is available. When a user initially joins the network, prior to transmitting uplink reference signals that enable beamforming, it needs system information---instructions on how to properly function within the network. It is transmission of system information that is the main focus of this thesis. In particular, the thesis analyzes how the reliability of the transmission of system information depends on the available amount of diversity. It is shown how downlink reference signals, space-time block codes, and power allocation can be used to improve the reliability of this transmission.

In order to estimate the uplink and downlink channels from uplink reference signals, which is imperative to ensure scalability in the number of base station antennas, massive MIMO relies on channel reciprocity. This thesis shows that the principles of channel reciprocity can also be exploited by a jammer, a malicious transmitter, aiming to disrupt legitimate communication between two single-antenna devices. A heuristic scheme is proposed in which the jammer estimates the channel to a target device blindly, without any knowledge of the transmitted legitimate signals, and subsequently beamforms noise towards the target. Under the same power constraint, the proposed jammer can disrupt the legitimate link more effectively than a conventional omnidirectional jammer in many cases.

```
@phdthesis{diva2:1235976,
author = {Karlsson, Marcus},
title = {{Blind Massive MIMO Base Stations:
Downlink Transmission and Jamming}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1950}},
year = {2018},
address = {Sweden},
}
```

Using images to reconstruct the world in three dimensions is a classical computer vision task. Some examples of applications where this is useful are autonomous mapping and navigation, urban planning, and special effects in movies. One common approach to 3D reconstruction is "structure from motion", where a scene is imaged multiple times from different positions, e.g. by moving the camera. However, in a twist of irony, many structure from motion methods work best when the camera is stationary while the image is captured. This is because the motion of the camera can cause distortions in the image that lead to worse image measurements, and thus a worse reconstruction. One such distortion common to all cameras is motion blur, while another is connected to the use of an electronic rolling shutter. Instead of capturing all pixels of the image at once, a camera with a rolling shutter captures the image row by row. If the camera is moving while the image is captured the rolling shutter causes non-rigid distortions in the image that, unless handled, can severely impact the reconstruction quality.
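
The row-wise capture is easy to state precisely (the frame rate, row count, and readout time below are made-up numbers, not from the thesis): every row gets its own timestamp, so a moving camera observes the scene from a slightly different pose for each row.

```python
H = 480                   # image rows (assumed)
t_frame = 1 / 30          # time between frame starts, s (assumed)
t_readout = 0.025         # time to read out all rows, s (assumed)

def row_timestamp(frame_idx, row):
    """Capture time of a given row; with a global shutter the second
    term would be zero and all rows would share one timestamp."""
    return frame_idx * t_frame + (row / H) * t_readout

# The first and last rows of frame 0 are captured ~25 ms apart
print(row_timestamp(0, 0), row_timestamp(0, H - 1))
```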

This thesis studies methods to robustly perform 3D reconstruction in the case of a moving camera. To do so, the proposed methods make use of an inertial measurement unit (IMU). The IMU measures the angular velocities and linear accelerations of the camera, and these can be used to estimate the trajectory of the camera over time. Knowledge of the camera motion can then be used to correct for the distortions caused by the rolling shutter. Another benefit of an IMU is that it can provide measurements also in situations when a camera cannot, e.g. because of excessive motion blur, or absence of scene structure.

To use a camera together with an IMU, the camera-IMU system must be jointly calibrated. The relationship between their respective coordinate frames needs to be established, and their timings need to be synchronized. This thesis shows how to perform this calibration and synchronization automatically, without requiring e.g. calibration objects or special motion patterns.

In standard structure from motion, the camera trajectory is modeled as discrete poses, with one pose per image. Switching instead to a formulation with a continuous-time camera trajectory provides a natural way to handle rolling shutter distortions, and also to incorporate inertial measurements. To model the continuous-time trajectory, many authors have used splines. The ability of a spline-based trajectory to model the real motion depends on the density of its spline knots. Choosing too smooth a spline results in approximation errors. This thesis proposes a method to estimate the spline approximation error, and to use it to better balance camera and IMU measurements in a sensor fusion framework. Also proposed is a way to automatically decide how dense the spline needs to be to achieve a good reconstruction.

Another approach to reconstruct a 3D scene is to use a camera that directly measures depth. Some depth cameras, like the well-known Microsoft Kinect, are susceptible to the same rolling shutter effects as normal cameras. This thesis quantifies the effect of the rolling shutter distortion on 3D reconstruction, depending on the amount of motion. It is also shown that a better 3D model is obtained if the depth images are corrected using inertial measurements.

```
@phdthesis{diva2:1220622,
author = {Ovr\'{e}n, Hannes},
title = {{Continuous Models for Cameras and Inertial Sensors}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1951}},
year = {2018},
address = {Sweden},
}
```

Visual tracking is one of the fundamental problems in computer vision. Its numerous applications include robotics, autonomous driving, augmented reality and 3D reconstruction. In essence, visual tracking can be described as the problem of estimating the trajectory of a target in a sequence of images. The target can be any image region or object of interest. While humans excel at this task, requiring little effort to perform accurate and robust visual tracking, it has proven difficult to automate. It has therefore remained one of the most active research topics in computer vision.

In its most general form, no prior knowledge about the object of interest or environment is given, except for the initial target location. This general form of tracking is known as generic visual tracking. The unconstrained nature of this problem makes it particularly difficult, yet applicable to a wider range of scenarios. As no prior knowledge is given, the tracker must learn an appearance model of the target on-the-fly. Cast as a machine learning problem, it imposes several major challenges which are addressed in this thesis.

The main purpose of this thesis is the study and advancement of the so-called Discriminative Correlation Filter (DCF) framework, as it has been shown to be particularly suitable for the tracking application. By utilizing properties of the Fourier transform, a correlation filter is discriminatively learned by efficiently minimizing a least-squares objective. The resulting filter is then applied to a new image in order to estimate the target location.
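
A single-channel, single-image version of this closed-form learning fits in a few lines (a MOSSE-style sketch under simplifying assumptions, far from the full framework developed in the thesis): thanks to the convolution theorem, the least-squares filter is solved independently per Fourier coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal((32, 32))        # training patch
g = np.zeros((32, 32)); g[0, 0] = 1.0    # desired response: peak at the target

# Closed form per Fourier coefficient, with lam a small regularizer
X, G = np.fft.fft2(x), np.fft.fft2(g)
lam = 1e-2
H = G * np.conj(X) / (X * np.conj(X) + lam)

# Applying the learned filter to the training patch recovers a sharp peak
response = np.real(np.fft.ifft2(H * X))
peak = np.unravel_index(response.argmax(), response.shape)
print(peak == (0, 0))   # -> True
```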

This thesis contributes to the advancement of the DCF methodology in several aspects. The main contribution regards the learning of the appearance model: First, the problem of updating the appearance model with new training samples is covered. Efficient update rules and numerical solvers are investigated for this task. Second, the periodic assumption induced by the circular convolution in DCF is countered by proposing a spatial regularization component. Third, an adaptive model of the training set is proposed to alleviate the impact of corrupted or mislabeled training samples. Fourth, a continuous-space formulation of the DCF is introduced, enabling the fusion of multiresolution features and sub-pixel accurate predictions. Finally, the problems of computational complexity and overfitting are addressed by investigating dimensionality reduction techniques.

As a second contribution, different feature representations for tracking are investigated. A particular focus is put on the analysis of color features, which had been largely overlooked in prior tracking research. This thesis also studies the use of deep features in DCF-based tracking. While many vision problems have greatly benefited from the advent of deep learning, it has proven difficult to harness the power of such representations for tracking. In this thesis it is shown that both shallow and deep layers contribute positively. Furthermore, the problem of fusing their complementary properties is investigated.

The final major contribution of this thesis regards the prediction of the target scale. In many applications, it is essential to track the scale, or size, of the target since it is strongly related to the relative distance. A thorough analysis of how to integrate scale estimation into the DCF framework is performed. A one-dimensional scale filter is proposed, enabling efficient and accurate scale estimation.

```
@phdthesis{diva2:1201230,
author = {Danelljan, Martin},
title = {{Learning Convolution Operators for Visual Tracking}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1926}},
year = {2018},
address = {Sweden},
}
```

The past decades have seen a rapid growth of mobile data traffic, both in terms of connected devices and data rate. To satisfy the ever-growing data traffic demand in wireless communication systems, the current cellular systems have to be redesigned to increase both spectral efficiency and energy efficiency. Massive MIMO (Multiple-Input Multiple-Output) is one solution that satisfies both requirements. In massive MIMO systems, hundreds of antennas are employed at the base station to provide service to many users at the same time and frequency. This enables the system to serve the users with uniformly good quality of service simultaneously, with low-cost hardware and without using extra bandwidth and energy. To achieve this, proper resource allocation is needed. Among the available resources, transmit power and beamforming are the most important degrees of freedom to control the spectral efficiency and energy efficiency. Due to the use of an excess number of antennas and low-end hardware at the base station, new aspects of power allocation and beamforming arise compared to current systems.

In the first part of the thesis, new uplink power allocation schemes based on long-term channel statistics are proposed. Since the quality of the channel estimates is crucial in massive MIMO, joint power allocation that includes the pilot power as an additional variable should be considered in addition to data power allocation. Therefore, a new framework for power allocation that matches practical systems is developed, as the methods developed in the literature cannot be applied directly to massive MIMO systems. Simulation results confirm the advantages brought by the proposed new framework.

In the second part, we introduce a new approach to solve the joint precoding and power allocation problem for different objectives in downlink scenarios, using a combination of random matrix theory and optimization theory. The new approach results in a simplified problem that, though non-convex, obeys a simple separable structure. Simulation results show that the proposed scheme provides large gains over heuristic solutions when the number of users in the cell is large, making it suitable for massive MIMO systems.

In the third part, we investigate the effects of using low-end amplifiers at the base stations. The non-linear behavior of power consumption in these amplifiers changes the power consumption model at the base station, and thereby the power allocation and beamforming design. Different scenarios are investigated, and results show that a certain number of antennas can be turned off in some scenarios.

In the last part, we consider the use of non-orthogonal multiple access (NOMA) in massive MIMO systems in practical scenarios where channel state information (CSI) is acquired through pilot signaling. An achievable rate analysis is carried out for different pilot signaling schemes, including both uplink and downlink pilots. Numerical results show that when downlink CSI is available at the users, our proposed NOMA scheme outperforms orthogonal schemes. However, with more groups of users present in the cell, it is preferable to use multi-user beamforming instead of NOMA.

```
@phdthesis{diva2:1190488,
author = {Cheng, Hei Victor},
title = {{Optimizing Massive MIMO:
Precoder Design and Power Allocation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1929}},
year = {2018},
address = {Sweden},
}
```

The international marine shipping industry is responsible for the transport of around 90% of the total world trade. Low-speed two-stroke diesel engines usually propel the largest trading ships. This engine type choice is mainly motivated by its high fuel efficiency and the capacity to burn cheap low-quality fuels. To reduce the marine freight impact on the environment, the International Maritime Organization (IMO) has introduced stricter limits on the engine pollutant emissions. One of these new restrictions, named Tier III, sets the maximum NOx emissions permitted. New emission reduction technologies have to be developed to fulfill the Tier III limits on two-stroke engines since adjusting the engine combustion alone is not sufficient. There are several promising technologies to achieve the required NOx reductions, Exhaust Gas Recirculation (EGR) is one of them. For automotive applications, EGR is a mature technology, and many of the research findings can be used directly in marine applications. However, there are some differences in marine two-stroke engines, which require further development to apply and control EGR.

The number of available engines for testing EGR controllers on ships and test beds is low due to the recent introduction of EGR. Hence, engine simulation models are a good alternative for developing controllers, and many different engine loading scenarios can be simulated without the high costs of running real engine tests. The primary focus of this thesis is the development and validation of models for two-stroke marine engines with EGR. The modeling follows a Mean Value Engine Model (MVEM) approach, which has a low computational complexity and permits faster than real-time simulations suitable for controller testing. A parameterization process that deals with the low measurement data availability, compared to the available data on automotive engines, is also investigated and described. As a result, the proposed model is parameterized to two different two-stroke engines showing a good agreement with the measurements in both stationary and dynamic conditions.
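
The flavor of mean-value modeling can be conveyed with a single filling-and-emptying control volume (all constants and the flow law below are made-up illustrations, not the engine model of the thesis): only slow states such as manifold pressure are integrated, with the fast in-cycle dynamics averaged out, which is why the simulation runs far faster than real time.

```python
R, T, V = 287.0, 300.0, 0.01    # gas constant [J/(kg K)], temperature [K], volume [m^3]
p = 1.0e5                       # initial manifold pressure [Pa]
dt = 0.01                       # forward-Euler integration step [s]

def flow_out(p):
    return 2e-3 * p / 1e5       # toy pressure-proportional outflow [kg/s] (assumption)

w_in = 3e-3                     # constant inflow [kg/s] (assumption)

# dp/dt = (R*T/V) * (w_in - w_out): simulate 60 s of filling dynamics
for _ in range(6000):
    p += dt * (R * T / V) * (w_in - flow_out(p))

print(p)   # settles near 1.5e5 Pa, where inflow equals outflow
```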

Several engine components have been developed. One of these is a new analytic in-cylinder pressure model that captures the influence of the injection and exhaust valve timings without increasing the simulation time. A new compressor model that can extrapolate to low speeds and pressure ratios in a physically sound way is also described. This compressor model is required to be able to simulate low engine loads. Moreover, a novel parameterization algorithm is shown to handle the model nonlinearities well and to obtain good model agreement with a large number of tested compressor maps. Furthermore, the engine model is complemented with dynamic models of the ship and propeller to be able to simulate transient sailing scenarios, where good EGR controller performance is crucial. The model is used to identify the low-load area as the most challenging for controller performance, due to the slower engine air path dynamics. Further low-load simulations indicate that sensor bias can be problematic and lead to undesired black smoke formation, while errors in the parameters of the controller flow estimators are not as critical. This result is valuable because, for a newly built engine, a proper sensor setup is more straightforward to verify than the right parameters for the flow estimators.

```
@phdthesis{diva2:1178537,
author = {Llamas, Xavier},
title = {{Modeling and Control of EGR on Marine Two-Stroke Diesel Engines}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1904}},
year = {2018},
address = {Sweden},
}
```

Massive MIMO (multiple-input multiple-output) is a multi-antenna technology for cellular wireless communication, where the base station uses a large number of individually controllable antennas to multiplex users spatially. This technology can provide a high spectral efficiency. One of its main challenges is the immense hardware complexity and cost of all the radio chains in the base station. To make massive MIMO commercially viable, inexpensive, low-complexity hardware with low linearity has to be used, which inherently leads to more signal distortion. This thesis investigates how the degraded linearity of some of the main components (power amplifiers, analog-to-digital converters (ADCs) and low-noise amplifiers) affects the performance of the system, with respect to data rate, power consumption and out-of-band radiation. The main results are the following. Spatial processing can reduce the PAR (peak-to-average ratio) of the transmit signals in the downlink to as low as 0 dB; this, however, does not necessarily reduce power consumption. In environments with isotropic fading, one-bit ADCs lead to a reduction in effective signal-to-interference-and-noise ratio (SINR) of 4 dB in the uplink, and four-bit ADCs give a performance close to that of an unquantized system. An analytical expression for the radiation pattern of the distortion from nonlinear power amplifiers is derived. It shows how the distortion is beamformed to some extent, that its gain is never greater than that of the desired signal, and that the gain of the distortion is reduced with a higher number of served users and a higher number of channel taps. Nonlinear low-noise amplifiers give rise to distortion that partly combines coherently and limits the possible SINR. It is concluded that spatial processing with a large number of antennas reduces the impact of hardware distortion in most cases. As long as proper attention is paid to the few sources of coherent distortion, the hardware complexity can be reduced in massive MIMO base stations to overcome the hardware challenge and make massive MIMO a commercial reality.
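
The PAR metric itself is simple to compute (an OFDM-like toy signal, assumed purely for illustration): the ratio of peak to mean instantaneous power, in dB, where a constant-envelope signal attains the 0 dB minimum.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
# QPSK symbols on 64 subcarriers give an OFDM-like time-domain signal
symbols = rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)
signal = np.fft.ifft(symbols)

power = np.abs(signal) ** 2
par_db = 10 * np.log10(power.max() / power.mean())
print(f"OFDM-like PAR: {par_db:.1f} dB")      # typically several dB

# A constant-envelope signal has the minimum possible PAR of 0 dB
ce = np.exp(1j * 2 * np.pi * rng.random(n))
ce_power = np.abs(ce) ** 2
print(10 * np.log10(ce_power.max() / ce_power.mean()))   # essentially 0
```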

```
@phdthesis{diva2:1163832,
author = {Moll\'{e}n, Christopher},
title = {{High-End Performance with Low-End Hardware:
Analysis of Massive MIMO Base Station Transceivers}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1896}},
year = {2017},
address = {Sweden},
}
```

Last updated: 2015-05-25