Publications: Information Coding
Journal papers
The orbital angular momentum (OAM) spatial degree of freedom of light has been widely explored in many applications, including telecommunications, quantum information, and light-based micromanipulation. The ability to separate and distinguish between the different transverse spatial modes is called mode sorting or mode demultiplexing, and it is essential to recover the encoded information in such applications. An ideal d-mode sorter should be able to faithfully distinguish between the d different spatial modes, with minimal losses, and have d outputs and fast response times. All previous mode sorters rely on bulk optical elements, such as spatial light modulators, which cannot be quickly tuned and have additional losses if they are to be integrated with optical fiber systems. Here, we propose and experimentally demonstrate, to the best of our knowledge, the first all-in-fiber method for OAM mode sorting with ultrafast dynamic reconfigurability. Our scheme first decomposes the OAM mode into fiber-optical linearly polarized (LP) modes and then interferometrically recombines them to determine the topological charge, thus correctly sorting the OAM mode. In addition, our setup can also be used to perform ultrafast routing of the OAM modes. These results show a novel and fiber-integrated form of optical spatial mode sorting that can be readily used for many new applications in classical and quantum information processing.
@article{diva2:1805285,
author = {Alarcon, Alvaro and Gomez, Santiago and Spegel-Lexne, Daniel and Argillander, Joakim and Carine, Jaime and Canas, Gustavo and Lima, Gustavo and Xavier, Guilherme B},
title = {{All-in-Fiber Dynamically Reconfigurable Orbital Angular Momentum Mode Sorting}},
journal = {ACS Photonics},
year = {2023},
volume = {10},
number = {10},
pages = {3700--3707},
}
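As an illustrative aside (not the authors' implementation), the sorting principle described in the abstract can be sketched numerically: assuming the textbook relation OAM_{±1} = (LP11a ± i·LP11b)/√2, recombining the two LP components with a 90° relative phase routes the light to one of two output ports depending on the sign of the topological charge.

import numpy as np

def oam_from_lp(sign):
    # Amplitudes of the (LP11a, LP11b) components of an OAM mode of charge sign*1,
    # assuming the standard decomposition OAM_{+/-1} = (LP11a +/- i*LP11b)/sqrt(2).
    return np.array([1.0, 1j * sign]) / np.sqrt(2)

def sort_oam(lp):
    # Interferometric recombination with a 90-degree relative phase between the
    # LP components; each output port collects one sign of the topological charge.
    a, b = lp
    p_plus = abs((a - 1j * b) / np.sqrt(2)) ** 2    # all power here for charge +1
    p_minus = abs((a + 1j * b) / np.sqrt(2)) ** 2   # all power here for charge -1
    return p_plus, p_minus

print(sort_oam(oam_from_lp(+1)))   # ~ (1.0, 0.0)
print(sort_oam(oam_from_lp(-1)))   # ~ (0.0, 1.0)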
Unique digital circuit outputs, considered as physical unclonable function (PUF) circuit outputs, can facilitate a secure and reliable secret key agreement. To tackle noise and high correlations between the PUF circuit outputs, transform coding methods combined with scalar quantizers are typically applied to extract the uncorrelated bit sequences reliably. In this paper, we create realistic models for these transformed outputs by fitting truncated distributions to them. We also show that the state-of-the-art models are inadequate to guarantee a target reliability level for all PUF outputs, which also means that secrecy cannot be guaranteed. Therefore, we introduce a quality of security parameter to control the percentage of the PUF circuit outputs for which a target security level can be guaranteed. By applying the finite-length information theory results to a public ring oscillator output dataset, we illustrate that security guarantees can be provided for each bit extracted from any PUF device by eliminating only a small subset of PUF circuit outputs. Furthermore, we conversely show that it is not possible to provide reliability or security guarantees without eliminating any PUF circuit output. Our holistic methods and analyses can be applied to any PUF type, as well as any biometric secrecy system, with continuous-valued outputs to extract secret keys with low hardware complexity.
@article{diva2:1798547,
author = {Günlü, Onur and Schaefer, Rafael F. and Poor, H. Vincent},
title = {{Quality of Security Guarantees for and with Physical Unclonable Functions and Biometric Secrecy Systems}},
journal = {Entropy},
year = {2023},
volume = {25},
number = {8},
}
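As a hedged illustration of the generic pipeline the abstract refers to (transform coding plus scalar quantization, with unreliable outputs discarded), the sketch below decorrelates hypothetical ring-oscillator counts with a DCT, extracts one bit per coefficient by sign, and masks coefficients whose magnitude falls below a reliability threshold. The transform choice, threshold, and data are assumptions for illustration, not the paper's models or parameters.

import numpy as np
from scipy.fft import dct

def extract_key_bits(ro_counts, reliability_threshold):
    # ro_counts: 1-D array of ring-oscillator counts from one device.
    coeffs = dct(ro_counts.astype(float), norm="ortho")  # decorrelating transform
    coeffs = coeffs[1:]                                  # drop the DC term (device mean)
    reliable = np.abs(coeffs) >= reliability_threshold   # mask outputs too weak to be reliable
    bits = (coeffs[reliable] > 0).astype(int)            # sign (scalar) quantizer
    return bits, reliable

rng = np.random.default_rng(0)
ro = rng.normal(1000.0, 5.0, size=64)                    # hypothetical noisy RO counts
bits, mask = extract_key_bits(ro, reliability_threshold=2.0)
print(len(bits), "bits kept of", mask.size, "coefficients")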
The distributed source coding problem is extended by positing that noisy measurements of a remote source are the correlated random variables that should be reconstructed at another terminal. We consider a secure and private distributed lossy source coding problem with two encoders and one decoder such that (i) all terminals noncausally observe a noisy measurement of the remote source; (ii) a private key is available to each legitimate encoder and all private keys are available to the decoder; (iii) rate-limited noiseless communication links are available between each encoder and the decoder; (iv) the amount of information leakage to an eavesdropper about the correlated random variables is defined as secrecy leakage; (v) privacy leakage is measured with respect to the remote source; and (vi) two passive attack scenarios are considered, where a strong eavesdropper can access both communication links and a weak eavesdropper can choose only one of the links to access. Inner and outer bounds on the rate regions defined under secrecy, privacy, communication, and distortion constraints are derived for both passive attack scenarios. When one or both sources should be reconstructed reliably, the rate region bounds are simplified.
@article{diva2:1792458,
author = {Günlü, Onur and Schaefer, Rafael F. and Boche, Holger and Poor, H. Vincent},
title = {{Secure and Private Distributed Source Coding With Private Keys and Decoder Side Information}},
journal = {IEEE Transactions on Information Forensics and Security},
year = {2023},
volume = {18},
pages = {3803--3816},
}
True random number generation is not thought to be possible using a classical approach but by instead exploiting quantum mechanics genuine randomness can be achieved. Here, the authors demonstrate a certified quantum random number generation using a metal-halide perovskite light emitting diode as a source of weak coherent polarisation states randomly producing an output of either 0 or 1. The recent development of perovskite light emitting diodes (PeLEDs) has the potential to revolutionize the fields of optical communication and lighting devices, due to their simplicity of fabrication and outstanding optical properties. Here we demonstrate that PeLEDs can also be used in the field of quantum technologies by implementing a highly-secure quantum random number generator (QRNG). Modern QRNGs that certify their privacy are poised to replace classical random number generators in applications such as encryption and gambling, and therefore need to be cheap, fast, and capable of integration. Using a compact metal-halide PeLED source, we generate random numbers, which are certified to be secure against an eavesdropper, following the quantum measurement-device-independent scenario. The obtained generation rate of more than 10 Mbit/s, which is already comparable to commercial devices, shows that PeLEDs can work as high-quality light sources for quantum information tasks, thus opening up future applications in quantum technologies.
@article{diva2:1784579,
author = {Argillander, Joakim and Alarcon, Alvaro and Bao, Chunxiong and Kuang, Chaoyang and Lima, Gustavo and Gao, Feng and Xavier, Guilherme B.},
title = {{Quantum random number generation based on a perovskite light emitting diode}},
journal = {Communications Physics},
year = {2023},
volume = {6},
number = {1},
}
A methodology for the generation of representative driving cycles is proposed and evaluated. The proposed method combines traffic simulation and driving behavior modeling to generate mission-based driving cycles. Extensions to the existing behavioral model in a traffic simulation tool are suggested and parameterized for different driver categories to capture the effects of road geometry and variances between drivers. The evaluation results illustrate that the developed extensions significantly improve the match between driving data and the driving cycles generated by traffic simulation. Using model extensions parameterized for different driver categories, instead of only one average driver, provides the possibility to represent different driving behaviors and further improve the realism of the resulting driving cycles.
@article{diva2:1764935,
author = {Kharrazi, Sogol and Nielsen, Lars and Frisk, Erik},
title = {{Generation of Mission-Based Driving Cycles Using Behavioral Models Parameterized for Different Driver Categories}},
journal = {SAE technical paper series},
year = {2023},
}
Photonic spatial quantum states are a subject of great interest for applications in quantum communication. One important challenge has been how to dynamically generate these states using only fiber-optical components. Here we propose and experimentally demonstrate an all-fiber system that can dynamically switch between any general transverse spatial qubit state based on linearly polarized modes. Our platform uses a fast optical switch based on a Sagnac interferometer, combined with a photonic lantern and few-mode optical fibers. We show switching times between spatial modes on the order of 5 ns and demonstrate the applicability of our scheme for quantum technologies by implementing a measurement-device-independent (MDI) quantum random number generator based on our platform. We run the generator continuously over 15 hours, acquiring over 13.46 Gbits of random numbers, of which we ensure that at least 60.52% are private, following the MDI protocol. Our results show the use of photonic lanterns to dynamically create spatial modes using only fiber components, which, due to their robustness and integration capabilities, have important consequences for photonic classical and quantum information processing.
@article{diva2:1758676,
author = {Alarcon, Alvaro and Argillander, Joakim and Spegel-Lexne, Daniel and Xavier, Guilherme B},
title = {{Dynamic generation of photonic spatial quantum states with an all-fiber platform}},
journal = {Optics Express},
year = {2023},
volume = {31},
number = {6},
pages = {10673--10683},
}
@article{diva2:1755055,
author = {Günlü, Onur and Schaefer, Rafael F. and Boche, Holger and Poor, H. Vincent},
title = {{Information Theoretic Methods for Future Communication Systems}},
journal = {Entropy},
year = {2023},
volume = {25},
number = {3},
}
Future brain-computer interfaces will require local and highly individualized signal processing of fully integrated electronic circuits within the nervous system and other living tissue. New devices will need to be developed that can receive data from a sensor array, process these data into meaningful information, and translate that information into a format that can be interpreted by living systems. Here, the first example of interfacing a hardware-based pattern classifier with a biological nerve is reported. The classifier implements the Widrow-Hoff learning algorithm on an array of evolvable organic electrochemical transistors (EOECTs). The EOECTs' channel conductance is modulated in situ by electropolymerizing the semiconductor material within the channel, allowing for low voltage operation, high reproducibility, and an improvement in state retention by two orders of magnitude over state-of-the-art OECT devices. The organic classifier is interfaced with a biological nerve using an organic electrochemical spiking neuron to translate the classifier's output to a simulated action potential. The latter is then used to stimulate muscle contraction selectively based on the input pattern, thus paving the way for the development of adaptive neural interfaces for closed-loop therapeutic systems.
@article{diva2:1750464,
author = {Gerasimov, Jennifer and Tu, Deyu and Hitaishi, Vivek and Padinhare, Harikesh and Yang, Chiyuan and Abrahamsson, Tobias and Karami Rad, Meysam and Donahue, Mary and Silverå Ejneby, Malin and Berggren, Magnus and Forchheimer, Robert and Fabiano, Simone},
title = {{A Biologically Interfaced Evolvable Organic Pattern Classifier}},
journal = {Advanced Science},
year = {2023},
volume = {10},
number = {14},
}
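For reference, the Widrow-Hoff (least-mean-squares) rule that the abstract says is implemented in hardware on the EOECT array is sketched below in plain Python, with floating-point weights standing in for the electropolymerized channel conductances; the toy patterns and learning rate are assumptions for illustration only.

import numpy as np

def widrow_hoff_train(X, targets, lr=0.1, epochs=200):
    # X: (n_samples, n_features) input patterns; targets: desired outputs (+1/-1).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, d in zip(X, targets):
            y = w @ x + b              # linear unit output
            err = d - y                # Widrow-Hoff error term
            w += lr * err * x          # delta-rule weight update
            b += lr * err
    return w, b

# Hypothetical 3-pixel patterns for two classes
X = np.array([[1, 0, 1], [1, 1, 1], [0, 1, 0], [0, 0, 1]], dtype=float)
t = np.array([+1, +1, -1, -1])
w, b = widrow_hoff_train(X, t)
print(np.sign(X @ w + b))              # expected to recover the labels [+1, +1, -1, -1]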
We present a method to estimate the time-to-impact (TTI) from a sequence of images. The method is based on detecting and tracking local extremal points. Their endurance within and between pixels is measured, accumulated, and used to estimate the TTI. This method, which improves on an earlier proposal, is entirely different from the ordinary optical flow technique and allows for fast and low-complexity processing. The method is inspired by insects, which have some TTI capability without the possibility to compute high-complexity optical flow. The method is further suitable for near-sensor image processing architectures.
@article{diva2:1738474,
author = {Åström, Anders and Forchheimer, Robert},
title = {{Statistical approach to time-to-impact estimation suitable for real-time near-sensor implementation}},
journal = {Journal of Electronic Imaging (JEI)},
year = {2022},
volume = {31},
number = {6},
}
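A minimal sketch of the quantity being estimated (not the paper's statistical per-pixel accumulation): under the standard looming relation, the time-to-impact is the ratio of an image-plane distance between tracked features, such as local extremal points, to its rate of growth.

def time_to_impact(s_prev, s_curr, dt):
    # s_prev, s_curr: image-plane spacing between two tracked features at t-dt and t.
    ds_dt = (s_curr - s_prev) / dt
    if ds_dt <= 0:
        return float("inf")        # not approaching
    return s_curr / ds_dt

# Hypothetical numbers: feature spacing grows from 20 to 21 pixels in 40 ms
print(time_to_impact(20.0, 21.0, 0.040))   # -> 0.84 s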
We extend the problem of secure source coding by considering a remote source whose noisy measurements are correlated random variables used for secure source reconstruction. The main additions to the problem are as follows: (1) all terminals noncausally observe a noisy measurement of the remote source; (2) a private key is available to all legitimate terminals; (3) the public communication link between the encoder and decoder is rate-limited; and (4) the secrecy leakage to the eavesdropper is measured with respect to the encoder input, whereas the privacy leakage is measured with respect to the remote source. Exact rate regions are characterized for a lossy source coding problem with a private key, remote source, and decoder side information under security, privacy, communication, and distortion constraints. By replacing the distortion constraint with a reliability constraint, we obtain the exact rate region for the lossless case as well. Furthermore, the lossy rate region for scalar discrete-time Gaussian sources and measurement channels is established. An achievable lossy rate region that can be numerically computed is also provided for binary-input multiple additive discrete-time Gaussian noise measurement channels.
@article{diva2:1733184,
author = {Günlü, Onur and Schaefer, Rafael F. and Boche, Holger and Poor, Harold Vincent},
title = {{Private Key and Decoder Side Information for Secure and Private Source Coding}},
journal = {Entropy},
year = {2022},
volume = {24},
number = {12},
}
A central result in the foundations of quantum mechanics is the Kochen-Specker theorem. In short, it states that quantum mechanics is in conflict with classical models in which the result of a measurement does not depend on which other compatible measurements are jointly performed. Here compatible measurements are those that can be implemented simultaneously or, more generally, those that are jointly measurable. This conflict is generically called quantum contextuality. In this review, an introduction to this subject and its current status is presented. Several proofs of the Kochen-Specker theorem and different notions of contextuality are reviewed. How to experimentally test some of these notions is explained, and connections between contextuality and nonlocality or graph theory are discussed. Finally, some applications of contextuality in quantum information processing are reviewed.
@article{diva2:1730642,
author = {Budroni, Costantino and Cabello, Adan and Guehne, Otfried and Kleinmann, Matthias and Larsson, Jan-Åke},
title = {{Kochen-Specker contextuality}},
journal = {Reviews of Modern Physics},
year = {2022},
volume = {94},
number = {4},
}
The most well-known tool for studying contextuality in quantum computation is the n-qubit Stabilizer state tableau representation. We provide an extension that not only describes the quantum state but is also outcome deterministic. The extension enables a value assignment to exponentially many Pauli observables, yet it remains quadratic in both memory and computational complexity. Furthermore, we show that the mechanisms employed for contextuality and measurement disturbance are wholly separate. The model will be useful for investigating the role of contextuality in n-qubit quantum computation.
@article{diva2:1709369,
author = {Hindlycke, Christoffer and Larsson, Jan-Åke},
title = {{Efficient Contextual Ontological Model of n-Qubit Stabilizer Quantum Mechanics}},
journal = {Physical Review Letters},
year = {2022},
volume = {129},
number = {13},
}
The secure transfer of information is critical to the ever-increasing demands of the digital world. Continuous-variable quantum key distribution (CV-QKD) is a promising technology that can provide high secret key rates over metropolitan areas, using conventional telecom components. In this study, we demonstrate the utilization of CV-QKD over a 15 km multi-core fiber (MCF), in which we take advantage of one core to remotely frequency-lock Bob's local oscillator to Alice's transmitter. We also demonstrate the capacity of the MCF to boost the secret key rate by parallelizing CV-QKD across multiple cores. Our results indicate that MCFs are promising for the metropolitan deployment of QKD systems.
@article{diva2:1678822,
author = {Sarmiento, S. and Etcheverry, S. and Aldama, J. and Lopez, I. H. and Vidarte, L. T. and Xavier, Guilherme B and Nolan, D. A. and Stone, J. S. and Li, M. J. and Loeber, D. and Pruneri, V},
title = {{Continuous-variable quantum key distribution over a 15 km multi-core fiber}},
journal = {New Journal of Physics},
year = {2022},
volume = {24},
number = {6},
}
Quantum random number generators (QRNGs) are based on naturally random measurement results performed on individual quantum systems. Here, we demonstrate a branching-path photonic QRNG implemented using a Sagnac interferometer with a tunable splitting ratio. The fine-tuning of the splitting ratio allows us to maximize the entropy of the generated sequence of random numbers and effectively compensate for tolerances in the components. By producing single-photons from attenuated telecom laser pulses, and employing commercially-available components, we are able to generate a sequence of more than 2 gigabytes of random numbers with an average entropy of 7.99 bits/byte directly from the raw measured data. Furthermore, our sequence passes randomness tests from both the NIST and Dieharder statistical test suites, thus certifying its randomness. Our scheme shows an alternative design of QRNGs based on the dynamic adjustment of the uniformity of the produced random sequence, which is relevant for the construction of modern generators that rely on independent real-time testing of their performance.
@article{diva2:1656618,
author = {Argillander, Joakim and Alarcon, Alvaro and Xavier, Guilherme B.},
title = {{A tunable quantum random number generator based on a fiber-optical Sagnac interferometer}},
journal = {Journal of Optics},
year = {2022},
volume = {24},
number = {6},
}
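A short sketch of how an entropy figure such as the quoted 7.99 bits/byte can be computed from raw data, namely as the Shannon entropy of the empirical byte histogram; the certification in the paper additionally relies on the NIST and Dieharder suites, and the data below are simulated rather than measured.

import numpy as np

def bits_per_byte(data: bytes) -> float:
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / len(data)
    return float(-(p * np.log2(p)).sum())

data = np.random.default_rng(1).integers(0, 256, 10**6, dtype=np.uint8).tobytes()
print(bits_per_byte(data))   # ~ 7.999 bits/byte for uniform simulated data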
Organic electronic circuits based on organic electrochemical transistors (OECTs) are attracting great attention due to their printability, flexibility, and low voltage operation. Inverters are the building blocks of digital logic circuits (e.g., NAND gates) and analog circuits (e.g., amplifiers). However, utilizing OECTs in electronic logic circuits is challenging due to the resulting low voltage gain and low output voltage levels. Hence, inverters capable of operating at relatively low supply voltages, yet offering high voltage gain and larger output voltage windows than the respective input voltage window are desired. Herein, inverters realized from poly(3,4-ethylenedioxythiophene):polystyrene sulfonate-based OECTs are designed and explored, resulting in logic inverters exhibiting high voltage gains, enlarged output voltage windows, and tunable switching points. The inverter designs are based on multiple screen-printed OECTs and a resistor ladder, where one OECT is the driving transistor while one or two additional OECTs are used as variable resistors in the resistor ladder. The inverters' performances are investigated in terms of voltage gain, output voltage levels, and switching point. Inverters, operating at +/-2.5 V supply voltage and an input voltage window of 1 V, that can achieve an output voltage window with an approximately 110% increase and a voltage gain up to 42 are demonstrated.
@article{diva2:1654209,
author = {Zabihipour, Marzieh and Tu, Deyu and Forchheimer, Robert and Strandberg, Jan and Berggren, Magnus and Engquist, Isak and Ersman, Peter Andersson},
title = {{High-Gain Logic Inverters based on Multiple Screen-Printed Organic Electrochemical Transistors}},
journal = {Advanced Materials Technologies},
year = {2022},
volume = {7},
number = {10},
}
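For context, the figures of merit quoted above (voltage gain, switching point, output voltage window) can be extracted from a measured inverter transfer curve as sketched below; the curve used here is a synthetic tanh stand-in, not measured OECT data.

import numpy as np

v_in = np.linspace(-0.5, 0.5, 201)                      # hypothetical input sweep (V)
v_out = 1.1 * np.tanh(-20 * (v_in - 0.05)) + 0.2        # hypothetical transfer curve (V)

gain = np.gradient(v_out, v_in)                         # dV_out/dV_in
k = np.argmax(np.abs(gain))
print("peak gain:", abs(gain[k]))                       # |dV_out/dV_in| at its maximum
print("switching point (V):", v_in[k])                  # input voltage where the gain peaks
print("output window (V):", v_out.max() - v_out.min())  # output voltage swing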
We report on an electronic coincidence detection circuit for quantum photonic applications implemented on a field-programmable gate array (FPGA), which records the time separation between detection events coming from single-photon detectors. We achieve a coincidence window as narrow as 500 ps with a series of optimizations on a readily-available and affordable FPGA development board. Our implementation allows real-time visualization of coincidence measurements for multiple coincidence window widths simultaneously. To demonstrate the advantage of our high-resolution visualization, we certified the generation of polarized entangled photons by collecting data from multiple coincidence windows with minimal accidental counts, obtaining a violation of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality by more than 338 standard deviations. Our results have shown the applicability of our electronic design in the field of quantum information.
@article{diva2:1602339,
author = {Carine, Jaime and Gomez, Santiago A. and Obregon, Giannini F. and Gomez, Esteban S. and Figueroa, Miguel and Lima, Gustavo and Xavier, Guilherme B},
title = {{Post-Measurement Adjustment of the Coincidence Window in Quantum Optics Experiments}},
journal = {IEEE Access},
year = {2021},
volume = {9},
pages = {94010--94016},
}
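As a worked example of the post-processing such coincidence counts feed into, the CHSH value is computed from the four coincidence counts per setting pair as sketched below; the counts are hypothetical, and any S above 2 is incompatible with local realism (the quantum limit is about 2.83).

def correlation(n_pp, n_pm, n_mp, n_mm):
    # E(a,b) from the coincidence counts for outcomes (+,+), (+,-), (-,+), (-,-).
    total = n_pp + n_pm + n_mp + n_mm
    return (n_pp - n_pm - n_mp + n_mm) / total

# Hypothetical counts for the four setting pairs (a,b), (a,b'), (a',b), (a',b')
E_ab   = correlation(850, 150, 150, 850)   # +0.70
E_abp  = correlation(850, 150, 150, 850)   # +0.70
E_apb  = correlation(850, 150, 150, 850)   # +0.70
E_apbp = correlation(150, 850, 850, 150)   # -0.70

S = E_ab + E_abp + E_apb - E_apbp          # CHSH combination
print(S)                                    # ~ 2.8 > 2: local realism violated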
A natural choice for quantum communication is to use the relative phase between two paths of a single photon for information encoding. This method was nevertheless quickly identified as impractical over long distances, and thus a modification based on single-photon time bins has become widely adopted. It, however, introduces a fundamental loss, which increases with the dimension and limits its application over long distances. Here we solve this long-standing hurdle by using a few-mode-fiber space-division-multiplexing platform working with orbital-angular-momentum modes. In our scheme, we maintain the practicability provided by the time-bin scheme, while the quantum states are transmitted through a few-mode fiber in a configuration that does not introduce postselection losses. We experimentally demonstrate our proposal by successfully transmitting phase-encoded single-photon states for quantum cryptography over 500 m of few-mode fiber, showing the feasibility of our scheme.
@article{diva2:1600986,
author = {Alarcon, Alvaro and Argillander, Joakim and Lima, G. and Xavier, Guilherme B},
title = {{Few-Mode-Fiber Technology Fine-tunes Losses in Quantum Communication Systems}},
journal = {Physical Review Applied},
year = {2021},
volume = {16},
number = {3},
}
To further develop a low-power low-cost optical motion detector for use with traffic detection under dark and daylight conditions, we have developed and verified a procedure to use a Near Sensor Image Processing (NSIP) programmable 2-D optical sensor in a "1-D mode" to achieve the effect of using a cylindrical lens, thus improving the angle-of-view (AOV), the sensitivity, and the usefulness of the sensor. Using an existing 256 x 256 element sensor in an innovative way, the AOV was increased from 0.4° to 21.3° in the vertical direction while also improving the sensitivity. The sensor hardware architecture is described in detail and pseudo code for programming the sensor is discussed. The results were used to demonstrate the extraction of Local Extreme Points (LEPs) used for Time-To-Impact (TTI) calculations to estimate the speed of an approaching vehicle.
@article{diva2:1599459,
author = {Johansson, Ted and Forchheimer, Robert and Åström, Anders},
title = {{Improving angle-of-view for a 1-D sensing application by using a 2-D optical sensor in "cylindrical" mode}},
journal = {IEEE Sensors Letters},
year = {2021},
volume = {5},
number = {10},
}
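A heavily hedged sketch of one reading of the "1-D mode" idea: collapsing the 2-D sensor image along the vertical axis so that every column behaves as one tall pixel, which is similar in effect to imaging through a cylindrical lens. This only illustrates the projection step in software; it is not the NSIP sensor's on-chip implementation.

import numpy as np

frame = np.random.default_rng(2).integers(0, 256, size=(256, 256))  # hypothetical 2-D frame
line = frame.sum(axis=0)        # 1-D signal, one value per column
print(line.shape)               # -> (256,)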
In this paper, we suggest basing the development of classification methods on traditional techniques but approximating them in whole or in part with artificial neural networks (ANNs). Compared to a direct ANN approach, the underlying traditional method is often easier to analyse. Classification failures can be better understood and corrected for, while at the same time faster execution can be obtained due to the parallel ANN structure. Furthermore, such a two-step design philosophy partly eliminates the guesswork associated with the design of ANNs. The expected gain is thus that the benefits from the traditional field, and the ANN field can be obtained by combining the best features from both fields. We illustrate our approach by working through an explicit example, namely, a Nearest Neighbour classifier applied to a subset of the MNIST database of handwritten digits. Two different approaches are discussed for how to translate the traditional method into an ANN. The first approach is based on a constructive implementation which directly reflects the original algorithm. The second approach uses ANNs to approximate the whole, or part of, the original method. An important part of the approach is to show how improvements can be introduced. In line with the presented philosophy, this is done by extending the traditional method in several ways followed by ANN approximation of the modified algorithms. The extensions are based on a windowed version of the nearest neighbour algorithm. We show that the improvements carry over to the ANN implementations. We further investigate the stability of the solutions by modifying the training set. It is shown that the errors do not change significantly. This also holds true for the ANN approximations providing confidence that the two-step strategy is robust.
@article{diva2:1543796,
author = {Kruglyak, Natan and Forchheimer, Robert},
title = {{Design of classifiers based on ANN approximations of traditional methods}},
journal = {International journal of circuit theory and applications},
year = {2021},
volume = {49},
number = {7},
pages = {1916--1931},
}
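The traditional method the paper starts from, a nearest-neighbour classifier, is shown below in its plain form (toy data, Euclidean distance); the paper's contribution is to approximate this, and windowed variants of it, with ANNs.

import numpy as np

def nearest_neighbour_predict(train_x, train_y, query):
    # Return the label of the training sample closest to `query` (Euclidean distance).
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(d)]

train_x = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 0.8]])
train_y = np.array([0, 1, 1])
print(nearest_neighbour_predict(train_x, train_y, np.array([0.8, 0.9])))  # -> 1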
Device and semi-device-independent private quantum randomness generators are crucial for applications requiring private randomness. However, they are vulnerable to detection inefficiency attacks and this severely limits their usage for practical purposes. Here, we present a method for protecting semi-device-independent private quantum randomness generators in prepare-and-measure scenarios against detection inefficiency attacks. The key idea is the introduction of a blocking device that adds failures in the communication between the preparation and measurement devices. We prove that, for any detection efficiency, there is a blocking rate that provides protection against these attacks. We experimentally demonstrate the generation of private randomness using weak coherent states and standard avalanche photo-detectors.
@article{diva2:1529986,
author = {Mironowicz, Piotr and Canas, Gustavo and Carine, Jaime and Gomez, Esteban S. and Barra, Johanna F. and Cabello, Adan and Xavier, Guilherme B and Lima, Gustavo and Pawlowski, Marcin},
title = {{Quantum randomness protected against detection loophole attacks}},
journal = {Quantum Information Processing},
year = {2021},
volume = {20},
number = {1},
}
The optical fibre is an essential tool for our communication infrastructure since it is the main transmission channel for optical communications. The latest major advance in optical fibre technology is space-division multiplexing, where new fibre designs and components establish multiple co-existing data channels based on light propagation over distinct transverse optical modes. Simultaneously, there have been many recent developments in the field of quantum information processing, with novel protocols and devices in areas such as computing and communication. Here, we review recent results in quantum information based on space-division multiplexing optical fibres, and discuss new possibilities based on this technology.
@article{diva2:1600311,
author = {Xavier, Guilherme B. and Lima, Gustavo},
title = {{Quantum information processing with space-division multiplexing optical fibres}},
journal = {Communications Physics},
year = {2020},
volume = {3},
number = {1},
}
An essential component of future quantum networks is an optical switch capable of dynamically routing single photons. Here we implement such a switch, based on a fiber-optical Sagnac interferometer design. The routing is implemented with a pair of fast electro-optical telecom phase modulators placed inside the Sagnac loop, such that each modulator acts on an orthogonal polarization component of the single photons, in order to yield polarization-independent capability that is crucial for several applications. We obtain an average extinction ratio of more than 19 dB between both outputs of the switch. Our experiment is built exclusively with commercial off-the-shelf components, thus allowing direct compatibility with current optical communication systems.
@article{diva2:1510440,
author = {Alarcon, Alvaro and Gonzalez, P. and Carine, J. and Lima, G. and Xavier, Guilherme B},
title = {{Polarization-independent single-photon switch based on a fiber-optical Sagnac interferometer for quantum communication networks}},
journal = {Optics Express},
year = {2020},
volume = {28},
number = {22},
pages = {33731--33738},
}
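As a brief note on the quoted figure, the extinction ratio is the power ratio between the selected and the suppressed output port expressed in decibels; 19 dB corresponds to roughly 1.3% leakage into the unwanted port, as the small sketch below illustrates with hypothetical powers.

import math

def extinction_ratio_db(p_on, p_off):
    # Ratio of the power in the selected port to the power in the suppressed port, in dB.
    return 10 * math.log10(p_on / p_off)

print(extinction_ratio_db(1.0, 0.0126))   # ~ 19 dB for a hypothetical 1.26% leak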
A CMOS sensor chip was used, together with an Arduino microcontroller, to create and verify a low-power low-cost optical motion detector for use in traffic detection under dark and daylight conditions. The chip can sense object features with very high dynamic range. On-chip near sensor image processing was used to reduce the data to be transferred to a host computer. A method using local extrema point detection was used to estimate motion through time-to-impact (TTI). Sensor data from the headlights of an approaching/passing car were used to extract TTI values similar to estimations from distance and speed of the object. The method can be used for detection of approaching objects to switch on streetlights (dark conditions) or sensors for traffic lights instead of magnetic sensors in the streets or conventional cameras (dark and daylight conditions). A sensor with a microcontroller operating at low clock frequency will consume less than 30 mW in this application.
@article{diva2:1463487,
author = {Johansson, Ted and Forchheimer, Robert and Åström, Anders},
title = {{Low-Power Optical Sensor for Traffic Detection}},
journal = {IEEE Sensors Letters},
year = {2020},
volume = {4},
number = {5},
}
Here, we report all-screen printed display driver circuits, based on organic electrochemical transistors (OECTs), and their monolithic integration with organic electrochromic displays (OECDs). Both OECTs and OECDs operate at low voltages and have similar device architectures, and, notably, they rely on the very same electroactive material as well as on the same electrochemical switching mechanism. This then allows us to manufacture OECT-OECD circuits in a concurrent manufacturing process entirely based on screen printing methods. By taking advantage of the high current throughput capability of OECTs, we further demonstrate their ability to control the light emission in traditional light-emitting diodes (LEDs), where the actual LED addressing is achieved by an OECT-based decoder circuit. The possibility to monolithically integrate all-screen printed OECTs and OECDs on flexible plastic foils paves the way for distributed smart sensor labels and similar Internet of Things applications.
@article{diva2:1461954,
author = {Andersson Ersman, Peter and Zabihipour, Marzieh and Tu, Deyu and Lassnig, Roman and Strandberg, Jan and Åhlin, Jessica and Nilsson, Marie and Westerberg, David and Gustafsson, Göran and Berggren, Magnus and Forchheimer, Robert and Fabiano, Simone},
title = {{Monolithic integration of display driver circuits and displays manufactured by screen printing}},
journal = {Flexible and Printed Electronics},
year = {2020},
volume = {5},
number = {2},
}
Multi-port beam splitters are cornerstone devices for high-dimensional quantum information tasks, which can outperform the two-dimensional ones. Nonetheless, the fabrication of such devices has proven to be challenging with progress only recently achieved with the advent of integrated photonics. Here, we report on the production of high-quality N x N (with N = 4, 7) multi-port beam splitters based on a new scheme for manipulating multi-core optical fibers. By exploring their compatibility with optical fiber components, we create four-dimensional quantum systems and implement the measurement-device-independent random number generation task with a programmable four-arm interferometer operating at a 2 MHz repetition rate. Due to the high visibilities observed, we surpass the one-bit limit of binary protocols and attain 1.23 bits of certified private randomness per experimental round. Our result demonstrates that fast switching, low loss, and high optical quality for high-dimensional quantum information can be simultaneously achieved with multi-core fiber technology.
@article{diva2:1444135,
author = {Carine, J. and Canas, G. and Skrzypczyk, P. and Supic, I and Guerrero, N. and Garcia, T. and Pereira, L. and Prosser, M. A. S. and Xavier, Guilherme B and Delgado, A. and Walborn, S. P. and Cavalcanti, D. and Lima, G.},
title = {{Multi-core fiber integrated multi-port beam splitters for quantum information processing}},
journal = {Optica},
year = {2020},
volume = {7},
number = {5},
pages = {542--550},
}
Quantum key distribution (QKD) is regarded as an alternative to traditional cryptography methods for securing data communication by quantum mechanics rather than computational complexity. Towards the massive deployment of QKD, embedding it within the telecommunication system is crucially important. Homogeneous optical multi-core fibers (MCFs) compatible with spatial division multiplexing (SDM) are essential components for the next-generation optical communication infrastructure, which provides great potential for the co-existence of optical telecommunication systems and QKD. However, the QKD channel is extremely vulnerable because the quantum states can be annihilated by noise during signal propagation. Thus, investigation of telecom compatibility for QKD co-existing with high-speed classical communication in SDM transmission media is needed. In this paper, we present analytical models of the noise sources in QKD links over heterogeneous MCFs. Spontaneous Raman scattering and inter-core crosstalk are experimentally characterized over spans of MCFs with different refractive index profiles, emulating shared telecom traffic conditions. Lower bounds for the secret key rates and quantum bit error rate (QBER) due to different core/wavelength allocation are obtained to validate intra- and inter-core co-existence of QKD and classical telecommunication.
@article{diva2:1434885,
author = {Lin, Rui and Udalcovs, Aleksejs and Ozolins, Oskars and Pang, Xiaodan and Gan, Lin and Tang, Ming and Fu, Songnian and Popov, Serge and Da Silva, Thiago Ferreira and Xavier, Guilherme B and Chen, Jiajia},
title = {{Telecommunication Compatibility Evaluation for Co-existing Quantum Key Distribution in Homogenous Multicore Fiber}},
journal = {IEEE Access},
year = {2020},
volume = {8},
pages = {78836--78846},
}
The communication outposts of the emerging Internet of Things are embodied by ordinary items, which desirably include all-printed flexible sensors, actuators, displays and akin organic electronic interface devices in combination with silicon-based digital signal processing and communication technologies. However, hybrid integration of smart electronic labels is partly hampered due to a lack of technology that (de)multiplexes signals between silicon chips and printed electronic devices. Here, we report all-printed 4-to-7 decoders and seven-bit shift registers, including over 100 organic electrochemical transistors each, thus minimizing the number of terminals required to drive monolithically integrated all-printed electrochromic displays. These relatively advanced circuits are enabled by a reduction of the transistor footprint, an effort which includes several further developments of materials and screen printing processes. Our findings demonstrate that digital circuits based on organic electrochemical transistors (OECTs) provide a unique bridge between all-printed organic electronics (OEs) and low-cost silicon chip technology for Internet of Things applications.
@article{diva2:1374076,
author = {Andersson Ersman, Peter and Lassnig, Roman and Strandberg, Jan and Tu, Deyu and Keshmiri, Vahid and Forchheimer, Robert and Fabiano, Simone and Gustafsson, Goran and Berggren, Magnus},
title = {{All-printed large-scale integrated circuits based on organic electrochemical transistors}},
journal = {Nature Communications},
year = {2019},
volume = {10},
}
Currently, new business models can be observed in content creator-based e-commerce. The research on e-commerce has grown rapidly and new concepts have emerged such as social commerce, platforms, and user-generated content. However, no overarching perspective has yet been formulated for distinguishing new content creator-based business models within e-commerce. The aim of this paper is therefore to characterize content creator-based business models by formulating a taxonomy of e-commerce based on a structured literature review of the concepts mentioned above. The results of our study point toward eight types of content creator-based business models. Our paper outlines theoretical and practical implications for the emerging phenomenon of content creator-based business, which we refer to as intellectual commerce. In addition, we describe 19 concepts related to Web 1.0, Web 2.0, and e-commerce.
@article{diva2:1369466,
author = {Mileros, Martin Daniel and Lakemond, Nicolette and Forchheimer, Robert},
title = {{Towards a Taxonomy of E-commerce: Characterizing Content Creator-Based Business Models}},
journal = {Technology Innovation Management Review},
year = {2019},
volume = {9},
number = {10},
pages = {62--77},
}
Query complexity is a common tool for comparing quantum and classical computation, and it has produced many examples of how quantum algorithms differ from classical ones. Here we investigate in detail the role that oracles play for the advantage of quantum algorithms. We do so by using a simulation framework, Quantum Simulation Logic (QSL), to construct oracles and algorithms that solve some problems with the same success probability and number of queries as the quantum algorithms. The framework can be simulated using only classical resources at a constant overhead as compared to the quantum resources used in quantum computation. Our results clarify the assumptions made and the conditions needed when using quantum oracles. Using the same assumptions on oracles within the simulation framework we show that for some specific algorithms, such as the Deutsch-Jozsa and Simon's algorithms, there simply is no advantage in terms of query complexity. This does not detract from the fact that quantum query complexity provides examples of how a quantum computer can be expected to behave, which in turn has proved useful for finding new quantum algorithms outside of the oracle paradigm, where the most prominent example is Shor's algorithm for integer factorization.
@article{diva2:1353454,
author = {Johansson, Niklas and Larsson, Jan-Åke},
title = {{Quantum Simulation Logic, Oracles, and the Quantum Advantage}},
journal = {Entropy},
year = {2019},
volume = {21},
number = {8},
}
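To make the query-complexity framing concrete, the sketch below counts classical oracle queries for the Deutsch-Jozsa problem (decide whether f: {0,1}^n -> {0,1} is constant or balanced). A deterministic classical algorithm needs 2^(n-1)+1 queries in the worst case, whereas the quantum algorithm, and according to the paper its QSL simulation, uses a single query; only the classical counting is shown here, with toy oracles.

from itertools import product

def classify_with_counting(f, n):
    # Query f on the inputs in a fixed order until the constant/balanced question
    # is settled; return the answer and the number of oracle queries used.
    seen = set()
    queries = 0
    for x in product([0, 1], repeat=n):
        seen.add(f(x))
        queries += 1
        if len(seen) == 2:
            return "balanced", queries        # two different values seen: balanced
        if queries == 2 ** (n - 1) + 1:
            return "constant", queries        # more than half the inputs agree: constant
    return "constant", queries                # safeguard, not reached for n >= 1

n = 3
constant_f = lambda x: 0
balanced_f = lambda x: x[-1]                  # 0 on half the inputs, 1 on the other half
print(classify_with_counting(constant_f, n))  # -> ('constant', 5), i.e. 2^(n-1)+1 queries
print(classify_with_counting(balanced_f, n))  # -> ('balanced', 2), settled early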
An evolvable organic electrochemical transistor (OECT), operating in the hybrid accumulation-depletion mode is reported, which exhibits short-term and long-term memory functionalities. The transistor channel, formed by an electropolymerized conducting polymer, can be formed, modulated, and obliterated in situ and under operation. Enduring changes in channel conductance, analogous to long-term potentiation and depression, are attained by electropolymerization and electrochemical overoxidation of the channel material, respectively. Transient changes in channel conductance, analogous to short-term potentiation and depression, are accomplished by inducing nonequilibrium doping states within the transistor channel. By manipulating the input signal, the strength of the transistor response to a given stimulus can be modulated within a range that spans several orders of magnitude, producing behavior that is directly comparable to short- and long-term neuroplasticity. The evolvable transistor is further incorporated into a simple circuit that mimics classical conditioning. It is forecasted that OECTs that can be physically and electronically modulated under operation will bring about a new paradigm of machine learning based on evolvable organic electronics.
@article{diva2:1315874,
author = {Gerasimov, Jennifer and Karlsson, Roger H and Forchheimer, Robert and Stavrinidou, Eleni and Simon, Daniel T and Berggren, Magnus and Fabiano, Simone},
title = {{An Evolvable Organic Electrochemical Transistor for Neuromorphic Applications}},
journal = {Advanced Science},
year = {2019},
volume = {6},
number = {7},
}
Driving cycles are nowadays, to an increasing extent, used as input to model-based vehicle design and as training data for development of vehicle models and functions with machine learning algorithms. Recorded real driving data may underrepresent or even lack important characteristics, and therefore there is a need to complement driving cycles obtained from real driving data with synthetic data that exhibit various desired characteristics. In this paper, an efficient method for generation of mission-based driving cycles is developed for this purpose. It is based on available effective methods for traffic simulation and available maps to define driving missions. By comparing the traffic simulation results with real driving data, insufficiencies in the existing behavioral model in the utilized traffic simulation tool are identified. Based on these findings, four extensions to the behavioral model are suggested, staying within the same class of computational complexity so that it can still be used on a large scale. The evaluation results show significant improvements in the match between the data measured on the road and the outputs of the traffic simulation with the suggested extensions of the behavioral model. The achieved improvements can be observed with both visual inspection and objective measures. For instance, the 40% difference in the relative positive acceleration of the originally simulated driving cycle compared to real driving data was eliminated using the suggested model.
@article{diva2:1297517,
author = {Kharrazi, Sogol and Almen, Marcus and Frisk, Erik and Nielsen, Lars},
title = {{Extending Behavioral Models to Generate Mission-Based Driving Cycles for Data-Driven Vehicle Development}},
journal = {IEEE Transactions on Vehicular Technology},
year = {2019},
volume = {68},
number = {2},
pages = {1222--1230},
}
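For reference, the relative positive acceleration mentioned above is a standard driving-cycle characteristic: the distance-normalised integral of speed times positive acceleration. A minimal sketch with a synthetic speed trace (not data from the paper):

import numpy as np

def relative_positive_acceleration(speed_mps, dt=1.0):
    accel = np.gradient(speed_mps, dt)
    pos = np.clip(accel, 0.0, None)                 # a^+: keep only accelerations
    distance = speed_mps.sum() * dt                 # total distance travelled (m)
    return (speed_mps * pos).sum() * dt / distance  # RPA, units of m/s^2

# Synthetic trace: accelerate to 15 m/s, cruise, then brake to a stop
v = np.concatenate([np.linspace(0, 15, 30), np.full(60, 15.0), np.linspace(15, 0, 30)])
print(relative_positive_acceleration(v))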
@article{diva2:1276264,
author = {Fedorov, Aleksey and Gerhardt, Ilja and Huang, Anqi and Jogenfors, Jonathan and Kurochkin, Yury and Lamas-Linares, Antia and Larsson, Jan-Åke and Leuchs, Gerd and Lydersen, Lars and Makarov, Vadim and Skaar, Johannes},
title = {{Comment on Inherent security of phase coding quantum key distribution systems against detector blinding attacks (vol 15, 095203, 2018)}},
journal = {Laser Physics Letters},
year = {2019},
volume = {16},
number = {1},
}
Paper is emerging as a promising flexible, high surface-area substrate for various new applications such as printed electronics, energy storage, and paper-based diagnostics. Many applications, however, require paper that reaches metallic conductivity levels, ideally at low cost. Here, an aqueous electroless copper-plating method is presented, which forms a conducting thin film of fused copper nanoparticles on the surface of the cellulose fibers. This paper can be used as a current collector for anodes of lithium-ion batteries. Owing to the porous structure and the large surface area of cellulose fibers, the copper-plated paper-based half-cell of the lithium-ion battery exhibits excellent rate performance and cycling stability, and even outperforms commercially available planar copper foil-based anode at ultra-high charge/discharge rates of 100 C and 200 C. This mechanically robust metallic-paper composite has promising applications as the current collector for light-weight, flexible, and foldable paper-based 3D Li-ion battery anodes.
@article{diva2:1273232,
author = {Wang, Zhen and Malti, Abdellah and Ouyang, Liangqi and Tu, Deyu and Tian, Weiqian and Wagberg, Lars and Hamedi, Mahiar Max},
title = {{Copper-Plated Paper for High-Performance Lithium-Ion Batteries}},
journal = {Small},
year = {2018},
volume = {14},
number = {48},
}
Entanglement is an invaluable resource for fundamental tests of physics and the implementation of quantum information protocols such as device-independent secure communications. In particular, time-bin entanglement is widely exploited to reach these purposes both in free space and optical fiber propagation, due to the robustness and simplicity of its implementation. However, all existing realizations of time-bin entanglement suffer from an intrinsic postselection loophole, which undermines their usefulness. Here, we report the first experimental violation of Bell's inequality with "genuine" time-bin entanglement, free of the postselection loophole. We introduced a novel function of the interferometers at the two measurement stations, which operate as fast synchronized optical switches. This scheme allowed us to obtain a postselection-loophole-free Bell violation of more than 9 standard deviations. Since our scheme is fully implementable using standard fiber-based components and is compatible with modern integrated photonics, our results pave the way for the distribution of genuine time-bin entanglement over long distances.
@article{diva2:1267322,
author = {Vedovato, Francesco and Agnesi, Costantino and Tomasin, Marco and Avesani, Marco and Larsson, Jan-Åke and Vallone, Giuseppe and Villoresi, Paolo},
title = {{Postselection-Loophole-Free Bell Violation with Genuine Time-Bin Entanglement}},
journal = {Physical Review Letters},
year = {2018},
volume = {121},
number = {19},
}
We combine the near-sensor image processing concept with address-event representation leading to an intensity-ranking image sensor (IRIS) and show the benefits of using this type of sensor for image classification. The functionality of IRIS is to output pixel coordinates (X and Y values) continuously as each pixel has collected a certain number of photons. Thus, the pixel outputs will be automatically intensity ranked. By keeping track of the timing of these events, it is possible to record the full dynamic range of the image. However, in many cases this is not necessary; the intensity ranking in itself gives the needed information for the task at hand. This paper describes techniques for classification and proposes a particular variant (groves) that fits the IRIS architecture well as it can work on the intensity rankings only. Simulation results using the CIFAR-10 dataset compare the results of the proposed method with the more conventional ferns technique. It is concluded that the simultaneous sensing and classification obtainable with the IRIS sensor yields both fast (shorter than full exposure time) and processing-efficient classification.
@article{diva2:1254020,
author = {Ahlberg, Jörgen and Åström, Anders and Forchheimer, Robert},
title = {{Simultaneous sensing, readout, and classification on an intensity-ranking image sensor}},
journal = {International journal of circuit theory and applications},
year = {2018},
volume = {46},
number = {9},
pages = {1606--1619},
}
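Conceptually, the IRIS output is a stream of pixel addresses ordered by intensity; the sketch below emulates that ordering in software on a synthetic image (the real sensor produces the address events asynchronously in hardware, brighter pixels first).

import numpy as np

def intensity_ranked_events(image):
    order = np.argsort(image, axis=None)[::-1]      # brightest pixel first
    ys, xs = np.unravel_index(order, image.shape)
    return list(zip(xs.tolist(), ys.tolist()))      # stream of (x, y) address events

img = np.random.default_rng(3).integers(0, 256, size=(4, 4))
print(intensity_ranked_events(img)[:5])             # the first five address events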
A method to decorate cellulose-based helices retrieved from the plant celery with a conductive polymer is proposed. Using a layer-by-layer method, the decoration of the polyanionic conducting polymer poly(4-(2,3-dihydrothieno [3,4-b]-[1,4]dioxin-2-yl-methoxy)-1-butanesulfonic acid (PEDOT-S) is enhanced after coating the negatively charged cellulose helix with a polycationic polyethyleneimine. Microscopy techniques and two-point probe are used to image the structure and measure the conductivity of the helix. Analysis of the optical and electrical properties of the coated helix in the terahertz (THz) frequency range shows a resonance close to 1 THz and a broad shoulder that extends to 3.5 THz, consistent with electromagnetic models. Moreover, as helical antennas, it is shown that both axial and normal modes are present, which are correlated to the orientation and antenna electrical lengths of the coated helices. This work opens the possibility of designing tunable terahertz antennas through simple control of their dimensions and orientation.
@article{diva2:1229831,
author = {Elfwing, Anders and Ponseca, Carlito and Ouyang, Liangqi and Urbanowicz, Andrzej and Krotkus, Arunas and Tu, Deyu and Forchheimer, Robert and Inganäs, Olle},
title = {{Conducting Helical Structures from Celery Decorated with a Metallic Conjugated Polymer Give Resonances in the Terahertz Range}},
journal = {Advanced Functional Materials},
year = {2018},
volume = {28},
number = {24},
}
We report on a new class of dimension witnesses, based on quantum random access codes, which are a function of the recorded statistics and have different bounds for all possible decompositions of a high-dimensional physical system. Thus, it certifies the dimension of the system and has the new distinct feature of identifying whether the high-dimensional system is decomposable in terms of lower dimensional subsystems. To demonstrate the practicability of this technique, we used it to experimentally certify the generation of an irreducible 1024-dimensional photonic quantum state, thereby certifying that the state is not multipartite nor encoded using noncoupled degrees of freedom of a single photon. Our protocol should find applications in a broad class of modern quantum information experiments addressing the generation of high-dimensional quantum systems, where quantum tomography may become intractable.
@article{diva2:1229816,
author = {Aguilar, Edgar A. and Farkas, Mate and Martinez, Daniel and Alvarado, Matias and Carine, Jaime and Xavier, Guilherme B and Barra, Johanna F. and Canas, Gustavo and Pawlowski, Marcin and Lima, Gustavo},
title = {{Certifying an Irreducible 1024-Dimensional Photonic State Using Refined Dimension Witnesses}},
journal = {Physical Review Letters},
year = {2018},
volume = {120},
number = {23},
}
We report on a new technique for entanglement distillation of the bipartite continuous variable state of spatially correlated photons generated in the spontaneous parametric down-conversion (SPDC) process, where tunable non-Gaussian operations are implemented and the post-processed entanglement is certified in real-time using a single-photon sensitive electron multiplying CCD (EMCCD) camera. The local operations are performed using non-Gaussian filters modulated into a programmable spatial light modulator and, by using the EMCCD camera for actively recording the probability distributions of the twin-photons, one has fine control of the Schmidt number of the distilled state. We show that even simple non-Gaussian filters can be finely tuned to give an approximately 67% net gain of the initial entanglement generated in the SPDC process.
@article{diva2:1219988,
author = {Gomez, E. S. and Riquelme, P. and Solis-Prosser, M. A. and Gonzalez, P. and Ortega, E. and Xavier, Guilherme B and Lima, G.},
title = {{Tunable entanglement distillation of spatially correlated down-converted photons}},
journal = {Optics Express},
year = {2018},
volume = {26},
number = {11},
pages = {13961--13972},
}
We consider buffered real-time communication over channels with time-dependent capacities which are known in advance. The real-time constraint is imposed in terms of limited transmission time between sender and receiver. For a network consisting of a single channel it is shown that there is a coding rate strategy, geometrically characterized as a taut string, which minimizes the average distortion with respect to all convex distortion-rate functions. Utilizing the taut string characterization further, an algorithm that computes the optimal coding rate strategy is provided. We then consider more general networks with several connected channels in parallel or series with intermediate buffers. It is shown that also for these networks there is a coding rate strategy, geometrically characterized as a taut string, which minimizes the average distortion with respect to all convex distortion-rate functions. The optimal offline strategy provides a benchmark for the evaluation of different coding rate strategies. Further, it guides us in the construction of a simple but rather efficient strategy for channels in the online setting which alternates between a good and a bad state.
@article{diva2:1213350,
author = {Setterqvist, Eric and Forchheimer, Robert},
title = {{Real-Time Communication Systems based on Taut Strings}},
journal = {Journal of Communications and Networks},
year = {2018},
volume = {20},
number = {2},
pages = {207--218},
}
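In hedged notation of my own (not the paper's), the taut-string characterization referred to above can be stated as follows: the cumulative amount of transmitted data R(t) is confined to a tube determined by the source arrival curve, the known channel capacities and the delay constraint, and the optimal offline strategy is the shortest path through that tube.

\[
  L(t) \le R(t) \le U(t), \qquad R(0) = L(0), \quad R(T) = U(T),
\]
\[
  R^{*} \;=\; \arg\min_{L \le R \le U} \int_{0}^{T} \varphi\bigl(R'(t)\bigr)\,dt
  \qquad \text{simultaneously for every convex } \varphi,
\]

so that choosing \(\varphi\) as a convex distortion-rate function shows that the taut string minimizes the average distortion.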
A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism, in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings. Although technology can satisfy the first two of these requirements, the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human 'free will' could be used rigorously to ensure unpredictability in Bell tests. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons, single atoms, atomic ensembles and superconducting devices. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite scenarios. Project outcomes include closing the 'freedom-of-choice loophole' (the possibility that the setting choices are influenced by 'hidden variables' to correlate with the particle properties), the utilization of video-game methods for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.
@article{diva2:1209417,
author = {Abellán, C. and Acín, A. and Alarcón, A. and Alibart, O. and Andersen, C. K. and Andreoli, F. and Beckert, A. and Beduini, F. A. and Bendersky, A. and Bentivegna, M. and Bierhorst, P. and Burchardt, D. and Cabello, A. and Cariñe, J. and Carrasco, S. and Carvacho, G. and Cavalcanti, D. and Chaves, R. and Cort\'{e}s-Vega, J. and Cuevas, A. and Delgado, A. and de Riedmatten, H. and Eichler, C. and Farrera, P. and Fuenzalida, J. and García-Matos, M. and Garthoff, R. and Gasparinetti, S. and Gerrits, T. and Ghafari Jouneghani, F. and Glancy, S. and Gómez, E. S. and González, P. and Guan, J. -Y. and Handsteiner, J. and Heinsoo, J. and Heintze, G. and Hirschmann, A. and Jim\'{e}nez, O. and Kaiser, F. and Knill, E. and Knoll, L. T. and Krinner, S. and Kurpiers, P. and Larotonda, M. A. and Larsson, Jan-Åke and Lenhard, A. and Li, H. and Li, M. -H. and Lima, G. and Liu, B. and Liu, Y. and López Grande, I. H. and Lunghi, T. and Ma, X. and Magaña-Loaiza, O. S. and Magnard, P. and Magnoni, A. and Martí-Prieto, M. and Martínez, D. and Mataloni, P. and Mattar, A. and Mazzera, M. and Mirin, R. P. and Mitchell, M. W. and Nam, S. and Oppliger, M. and Pan, J. -W. and Patel, R. B. and Pryde, G. J. and Rauch, D. and Redeker, K. and Rieländer, D. and Ringbauer, M. and Roberson, T. and Rosenfeld, W. and Salath\'{e}, Y. and Santodonato, L. and Sauder, G. and Scheidl, T. and Schmiegelow, C. T. and Sciarrino, F. and Seri, A. and Shalm, L. K. and Shi, S. -C and Slussarenko, S. and Stevens, M. J. and Tanzilli, S. and Toledo, F. and Tura, J. and Ursin, R. and Vergyris, P. and Verma, V. B. and Walter, T. and Wallraff, A. and Wang, Z. and Weinfurter, H. and Weston, M. M. and White, A. G. and Wu, C. and Xavier, Guilherme B. and You, L. and Yuan, X. and Zeilinger, A. and Zhang, Q. and Zhang, W. and Zhong, J.},
title = {{Challenging Local Realism with Human Choices}},
journal = {Nature},
year = {2018},
volume = {557},
pages = {212--216},
}
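As a hedged back-of-the-envelope check of the figures quoted in this abstract (assuming all 97,347,490 choices fell within the stated 12-hour window), the implied average bit rate is consistent with the quoted sustained rate of over 1,000 bits per second:

    # Rough consistency check of the data rate quoted above (assumes all
    # choices fell within the 12-hour window; illustration only).
    total_bits = 97_347_490      # binary choices generated by participants
    window_s = 12 * 3600         # 12-hour data-taking window in seconds
    print(f"average rate: {total_bits / window_s:.0f} bits per second")  # ~2253 bits/s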
Organic electrochemical transistors (OECTs) have been the subject of intense research in recent years. To date, however, most of the reported OECTs rely entirely on p-type (hole transport) operation, while electron transporting (n-type) OECTs are rare. The combination of efficient and stable p-type and n-type OECTs would allow for the development of complementary circuits, dramatically advancing the sophistication of OECT-based technologies. Poor stability in air and aqueous electrolyte media, low electron mobility, and/or a lack of electrochemical reversibility of available high-electron-affinity conjugated polymers have made the development of n-type OECTs troublesome. Here, it is shown that ladder-type polymers such as poly(benzimidazobenzophenanthroline) (BBL) can successfully work as a stable and efficient n-channel material for OECTs. These devices can be easily fabricated by means of facile spray-coating techniques. BBL-based OECTs show high transconductance (up to 9.7 mS) and excellent stability in ambient and aqueous media. It is demonstrated that BBL-based n-type OECTs can be successfully integrated with p-type OECTs to form electrochemical complementary inverters. The latter show high gains and large worst-case noise margin at a supply voltage below 0.6 V.
@article{diva2:1192027,
author = {Sun, Hengda and Vagin, Mikhail and Wang, Suhao and Crispin, Xavier and Forchheimer, Robert and Berggren, Magnus and Fabiano, Simone},
title = {{Complementary Logic Circuits Based on High-Performance n-Type Organic Electrochemical Transistors}},
journal = {Advanced Materials},
year = {2018},
volume = {30},
number = {9},
}
In this work, we report macroscopic electromagnetic devices made from conducting polymers. We compare their fundamental properties and device parameters with those of similar devices made from copper wires. By using self-standing supra-ampere conducting polymer wires, we are able to manufacture inductors that generate magnetic fields well over 1 G, and incorporate them in feedback LC oscillators operating at 8.65 MHz. Moreover, by utilizing the unique electrochemical functionality of conducting polymers, we demonstrate electrochemically-tunable electromagnets and electromagnetic chemical sensors. Our findings pave the way to lightweight electromagnetic technologies that can be processed (from water dispersions) using low-temperature protocols into flexible shapes and geometries.
@article{diva2:1463482,
author = {Malti, Abdellah and Tu, Deyu and Edberg, Jesper and Abdollahi Sani, Negar and Rudd, S. and Evans, D. and Forchheimer, Robert},
title = {{Electromagnetic devices from conducting polymers}},
journal = {Organic electronics},
year = {2017},
volume = {50},
pages = {304--310},
}
Vertical organic electrochemical transistors (OECTs) have been manufactured solely using screen printing. The OECTs are based on PEDOT:PSS (poly(3,4-ethylenedioxythiophene) doped with poly (styrene sulfonic acid)), which defines the active material for both the transistor channel and the gate electrode. The resulting vertical OECT devices and circuits exhibit low-voltage operation, relatively fast switching, small footprint and high manufacturing yield; the last three parameters are explained by the reliance of the transistor configuration on a robust structure in which the electrolyte vertically bridges the bottom channel and the top gate electrode. Two different architectures of the vertical OECT have been manufactured, characterized and evaluated in parallel throughout this report. In addition to the experimental work, SPICE models enabling simulations of standalone OECTs and OECT-based circuits have been developed. Our findings may pave the way for fully integrated, low-voltage operating and printed signal processing systems integrated with e.g. printed batteries, solar cells, sensors and communication interfaces. Such technology can then serve as a low-cost base technology for the internet of things, smart packaging and home diagnostics applications.
@article{diva2:1362793,
author = {Andersson Ersman, Peter and Westerberg, David and Tu, Deyu and Nilsson, Marie and Åhlin, Jessica and Eveborn, Annelie and Lagerlöf, Axel and Nilsson, David and Sandberg, Mats and Norberg, Petronella and Berggren, Magnus and Forchheimer, Robert and Gustafsson, Göran},
title = {{Screen printed digital circuits based on vertical organic electrochemical transistors}},
journal = {Flexible and Printed Electronics},
year = {2017},
volume = {2},
number = {4},
}
A 2-D device model of the organic electrochemical transistor is described and validated. Devices with channel lengths in the range 100 nm to 10 mm and channel thicknesses in the range 50 nm to 5 µm are modeled. Steady-state, transient, and AC simulations are presented. Using realistic values of the physical parameters, the results are in good agreement with the experiments. The scaling of transconductance, bulk capacitance, and transient responses with device dimensions is well reproduced. The model reveals the important role of the electrical double layers in the channel, and the limitations of device scaling.
@article{diva2:1170023,
author = {Szymanski, Marek and Tu, Deyu and Forchheimer, Robert},
title = {{2-D Drift-Diffusion Simulation of Organic Electrochemical Transistors}},
journal = {IEEE Transactions on Electron Devices},
year = {2017},
volume = {64},
number = {12},
pages = {5114--5120},
}
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable in a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.
@article{diva2:1140144,
author = {Johansson, Niklas and Larsson, Jan-Åke},
title = {{Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms}},
journal = {Quantum Information Processing},
year = {2017},
volume = {16},
number = {9},
}
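As background for the oracle problems named in this abstract, the following minimal sketch (standard textbook material, not the simulation framework of the paper) states the Deutsch-Jozsa promise problem and the naive deterministic classical strategy, which needs 2^(n-1)+1 oracle queries in the worst case, whereas the quantum algorithm, and the framework described above, use a single query.

    # Minimal statement of the Deutsch-Jozsa promise problem (background only;
    # this is NOT the classical simulation framework of the paper above).
    from itertools import product

    def is_constant_naive(oracle, n):
        """Decide 'constant' vs 'balanced' for an n-bit oracle by brute force.
        Worst case: 2**(n-1) + 1 queries; the quantum algorithm needs one."""
        seen = set()
        for i, x in enumerate(product([0, 1], repeat=n), start=1):
            seen.add(oracle(x))
            if len(seen) > 1:
                return False          # two different outputs: balanced
            if i == 2**(n - 1) + 1:
                return True           # the promise guarantees constant
        return True

    # Example oracles satisfying the promise
    constant = lambda x: 0
    balanced = lambda x: x[0]         # 0 on half the inputs, 1 on the other half
    print(is_constant_naive(constant, 3), is_constant_naive(balanced, 3))  # True False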
In any Bell test, loopholes can cause issues in the interpretation of the results, since an apparent violation of the inequality may not correspond to a violation of local realism. An important example is the coincidence-time loophole that arises when detector settings might influence the time when detection will occur. This effect can be observed in many experiments where measurement outcomes are to be compared between remote stations because the interpretation of an ostensible Bell violation strongly depends on the method used to decide coincidence. The coincidence-time loophole has previously been studied for the Clauser-Horne-Shimony-Holt and Clauser-Horne inequalities, but recent experiments have shown the need for a generalization. Here, we study the generalized chained inequality by Pearle, Braunstein, and Caves (PBC) with N ≥ 2 settings per observer. This inequality has applications in, for instance, quantum key distribution where it has been used to reestablish security. In this paper we give the minimum coincidence probability for the PBC inequality for all N ≥ 2 and show that this bound is tight for a violation free of the fair-coincidence assumption. Thus, if an experiment has a coincidence probability exceeding the critical value derived here, the coincidence-time loophole is eliminated.
@article{diva2:1135472,
author = {Jogenfors, Jonathan and Larsson, Jan-Åke},
title = {{Tight bounds for the Pearle-Braunstein-Caves chained inequality without the fair-coincidence assumption}},
journal = {Physical Review A: covering atomic, molecular, and optical physics and quantum information},
year = {2017},
volume = {96},
number = {2},
}
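For orientation, a hedged sketch of the standard bounds for the chained (PBC) Bell expression with N settings per observer (these are textbook values, not the coincidence-probability threshold derived in the paper): local realism allows at most 2N - 2, while quantum mechanics reaches 2N cos(pi/2N); N = 2 recovers the familiar CHSH values 2 and 2*sqrt(2).

    # Local-realist vs quantum bounds for the chained (PBC) Bell expression
    # with N settings per observer (standard values, not the paper's
    # coincidence-probability threshold).
    from math import cos, pi, sqrt

    for N in range(2, 7):
        local_bound = 2 * N - 2
        quantum_max = 2 * N * cos(pi / (2 * N))
        print(f"N={N}: local <= {local_bound}, quantum max = {quantum_max:.4f}")

    # Sanity check: N = 2 is the CHSH case, quantum maximum 2*sqrt(2)
    assert abs(2 * 2 * cos(pi / 4) - 2 * sqrt(2)) < 1e-12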
The design, fabrication and operation of a range of functional power converter circuits, based on diode configured organic field-effect transistors as the rectifying unit and capable of transforming a high AC input voltage to a selectable DC voltage, are presented. The converter functionality is demonstrated by selecting and tuning its constituents so that it can effectively drive a low-voltage organic electronic device, a light-emitting electrochemical cell (LEC), when connected to high-voltage AC mains. It is established that the preferred converter circuit for this task comprises an organic full-wave rectifier and a regulation resistor but omits a smoothing capacitor, and that such a circuit connected to the AC mains (230 V, 50 Hz) can successfully drive an LEC to bright luminance (360 cd m⁻²) and high efficiency (6.4 cd A⁻¹).
@article{diva2:1109392,
author = {Larsen, Christian and Forchheimer, Robert and Edman, Ludvig and Tu, Deyu},
title = {{Design, fabrication and application of organic power converters: Driving light-emitting electrochemical cells from the AC mains}},
journal = {Organic electronics},
year = {2017},
volume = {45},
pages = {57--64},
}
The violation of Bell's inequality requires a well-designed experiment to validate the result. In experiments using energy-time and time-bin entanglement, initially proposed by Franson in 1989, there is an intrinsic loophole due to the high postselection. To obtain a violation in this type of experiment, a chained Bell inequality must be used. However, the local realism bound requires a high visibility in excess of 94.63% in the time-bin entangled state. In this work, we show how such a high visibility can be reached in order to violate a chained Bell inequality with six, eight, and ten terms.
@article{diva2:1089969,
author = {Tomasin, Marco and Mantoan, Elia and Jogenfors, Jonathan and Vallone, Giuseppe and Larsson, Jan-Åke and Villoresi, Paolo},
title = {{High-visibility time-bin entanglement for testing chained Bell inequalities}},
journal = {Physical Review A},
year = {2017},
volume = {95},
number = {3},
}
Cell voltage equalizers are an important part in electric energy storage systems comprising series-connected cells, for example, supercapacitors. Hybrid electronics with silicon chips and printed devices enables electronic systems with moderate performance and low cost. This paper presents a silicon-organic hybrid voltage equalizer to balance and protect series-connected supercapacitor cells during charging. Printed organic electrochemical transistors with conducting polymer poly(3,4-ethylenedioxythiophene): poly(styrene sulfonate) (PEDOT: PSS) are utilized to bypass excess current when the supercapacitor cells are fully charged to desired voltages. In this study, low-cost silicon microcontrollers (ATtiny85) are programmed to sense voltages across the supercapacitor cells and control the organic electrochemical transistors to bypass charging current when the voltages exceed 1 V. Experimental results show that the hybrid equalizer with the organic electrochemical transistors works in a dual mode, either switched-transistor mode or constant-resistor mode, depending on the charging current applied (0.3-100 mA). With the voltage equalizer, capacitors are charged equally regardless of their capacitances. This work demonstrates a low-cost hybrid solution for supercapacitor balancing modules for large-scale packs.
@article{diva2:1089853,
author = {Keshmiri, Vahid and Westerberg, David and Andersson Ersman, Peter and Sandberg, Mats and Forchheimer, Robert and Tu, Deyu},
title = {{A Silicon-Organic Hybrid Voltage Equalizer for Supercapacitor Balancing}},
journal = {IEEE Journal on Emerging and Selected Topics in Circuits and Systems},
year = {2017},
volume = {7},
number = {1},
pages = {114--122},
}
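A minimal sketch of the threshold-bypass balancing logic described in this abstract, written as plain Python pseudologic rather than ATtiny85 firmware; read_cell_voltage and set_bypass_transistor are hypothetical placeholders, and the 1 V threshold is the one quoted above.

    # Hedged sketch of the balancing logic: when a cell reaches the target
    # voltage, its bypass transistor is turned on to shunt the charging current.
    # read_cell_voltage() and set_bypass_transistor() are hypothetical stand-ins
    # for the microcontroller firmware described above.
    V_THRESHOLD = 1.0  # volts, per the abstract

    def balance_step(cells, read_cell_voltage, set_bypass_transistor):
        """One polling pass over all series-connected supercapacitor cells."""
        for cell in cells:
            v = read_cell_voltage(cell)
            # Bypass excess current once the cell is fully charged
            set_bypass_transistor(cell, on=(v >= V_THRESHOLD))

    # Toy usage with simulated voltages
    voltages = {0: 0.82, 1: 1.03, 2: 0.97}
    state = {}
    balance_step(voltages.keys(),
                 read_cell_voltage=lambda c: voltages[c],
                 set_bypass_transistor=lambda c, on: state.update({c: on}))
    print(state)  # {0: False, 1: True, 2: False}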
The interpretation of quantum theory is one of the longest-standing debates in physics. Type I interpretations see quantum probabilities as determined by intrinsic properties of the observed system. Type II see them as relational experiences between an observer and the system. It is usually believed that a decision between these two options cannot be made simply on purely physical grounds but requires an act of metaphysical judgment. Here we show that, under some assumptions, the problem is decidable using thermodynamics. We prove that type I interpretations are incompatible with the following assumptions: (i) The choice of which measurement is performed can be made randomly and independently of the system under observation, (ii) the system has limited memory, and (iii) Landauer's erasure principle holds.
@article{diva2:1057488,
author = {Cabello, Adan and Gu, Mile and Guehne, Otfried and Larsson, Jan-Åke and Wiesner, Karoline},
title = {{Thermodynamical cost of some interpretations of quantum theory}},
journal = {Physical Review A},
year = {2016},
volume = {94},
number = {5},
}
Experimental violations of Bell inequalities are in general vulnerable to so-called loopholes. In this work, we analyze the characteristics of a loophole-free Bell test with photons, closing simultaneously the locality, freedom-of-choice, fair-sampling (i.e., detection), coincidence-time, and memory loopholes. We pay special attention to the effect of excess predictability in the setting choices due to nonideal random-number generators. We discuss necessary adaptations of the Clauser-Horne and Eberhard inequality when using such imperfect devices and, using Hoeffding's inequality and Doob's optional stopping theorem, the statistical analysis in such Bell tests.
@article{diva2:917725,
author = {Kofler, Johannes and Giustina, Marissa and Larsson, Jan-Åke and Mitchell, Morgan W.},
title = {{Requirements for a loophole-free photonic Bell test using imperfect setting generators}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2016},
volume = {93},
number = {3},
pages = {032115--},
}
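To illustrate the kind of statistical statement referred to in this abstract, here is a hedged sketch of a Hoeffding-type p-value bound for n independent trials with outcomes in [0, 1]; the numbers are invented for illustration and are not taken from any experiment discussed here.

    # Hedged illustration of a Hoeffding-type p-value bound (toy numbers only).
    # For n i.i.d. trials with outcomes in [0, 1]:
    # P(observed mean >= local bound + t) <= exp(-2 * n * t**2).
    from math import exp

    n = 10_000          # number of trials (hypothetical)
    local_bound = 0.75  # maximum mean allowed by local realism (hypothetical)
    observed = 0.80     # observed mean (hypothetical)

    t = observed - local_bound
    p_value_bound = exp(-2 * n * t**2)
    print(f"p-value bound under local realism: {p_value_bound:.3e}")  # ~1.9e-22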
Everyday experience supports the existence of physical properties independent of observation in strong contrast to the predictions of quantum theory. In particular, the existence of physical properties that are independent of the measurement context is prohibited for certain quantum systems. This property is known as contextuality. This Rapid Communication studies whether the process of decay in space-time generally destroys the ability to reveal contextuality. We find that in the most general situation the decay property does not diminish this ability. However, applying certain constraints due to the space-time structure either on the time evolution of the decaying system or on the measurement procedure, the criteria revealing contextuality become inherently dependent on the decay property or an impossibility. In particular, we derive how the context-revealing setup known as Bell's nonlocality tests changes for decaying quantum systems. Our findings illustrate the interdependence between hidden and local hidden parameter theories and the role of time.
@article{diva2:917194,
author = {Hiesmayr, Beatrix C. and Larsson, Jan-Åke},
title = {{Contextuality and nonlocality in decaying multipartite systems}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2016},
volume = {93},
number = {2},
pages = {020106(R)--},
}
A simplified model is developed to understand the field and potential distribution through devices based on a ferroelectric film in direct contact with an electrolyte. Devices based on the ferroelectric polymer polyvinylidenefluoride-trifluoroethylene (PVDF-TrFE) were produced – in metal-ferroelectric-metal, metal-ferroelectric-dielectric-metal, and metal-ferroelectric-electrolyte-metal architectures – and used to test the model, and simulations based on the model and these fabricated devices were performed. From these simulations we find indication of progressive polarization of the films. Furthermore, the model implies that there is a relation between the separation of charge within the devices and the observed open circuit voltage. This relation is confirmed experimentally. The ability to polarize ferroelectric polymer films through aqueous electrolytes, combined with the strong correlation between the properties of the electrolyte double layer and the device potential, opens the door to a variety of new applications for ferroelectric technologies, e.g., regulation of cell culture growth and release, steering molecular self-assembly, or other large area applications requiring aqueous environments.
@article{diva2:859435,
author = {Toss, Henrik and Sani, Negar and Fabiano, Simone and Simon, Daniel T and Forchheimer, Robert and Berggren, Magnus},
title = {{Polarization of ferroelectric films through electrolyte}},
journal = {Journal of Physics},
year = {2016},
volume = {28},
number = {10},
}
We demonstrate how adversaries with unbounded computing resources can break Quantum Key Distribution (QKD) protocols which employ a particular message authentication code suggested previously. This authentication code, featuring low key consumption, is not Information-Theoretically Secure (ITS) since for each message the eavesdropper has intercepted she is able to send a different message from a set of messages that she can calculate by finding collisions of a cryptographic hash function. However, when this authentication code was introduced it was shown to prevent straightforward Man-In-The-Middle (MITM) attacks against QKD protocols.
In this paper, we prove that the set of messages that collide with any given message under this authentication code contains with high probability a message that has small Hamming distance to any other given message. Based on this fact we present extended MITM attacks against different versions of BB84 QKD protocols using the addressed authentication code; for three protocols we describe every single action taken by the adversary. For all protocols the adversary can obtain complete knowledge of the key, and for most protocols her success probability in doing so approaches unity.
Since the attacks work against all authentication methods which allow colliding messages to be calculated, the underlying building blocks of the presented attacks expose the potential pitfalls arising as a consequence of non-ITS authentication in QKD postprocessing. We propose countermeasures, increasing the eavesdropper's demand for computational power, and also prove necessary and sufficient conditions for upgrading the discussed authentication code to the ITS level.
@article{diva2:616697,
author = {Pacher, Christoph and Abidin, Aysajan and Lorünser, Thomas and Peev, Momtchil and Ursin, Rupert and Zeilinger, Anton and Larsson, Jan-Åke},
title = {{Attacks on quantum key distribution protocols that employ non-ITS authentication}},
journal = {Quantum Information Processing},
year = {2016},
volume = {15},
number = {1},
pages = {327--362},
}
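As a toy illustration of why authentication that allows colliding messages to be calculated is dangerous (this is only a brute-force collision search on a deliberately weakened tag, not the attack constructed in the paper):

    # Toy illustration only: with a short, key-independent tag, an attacker
    # with enough computing power can find a second message carrying the
    # same tag by brute force. Not the attack described in the entry above.
    import hashlib

    def toy_tag(message: bytes, bits: int = 16) -> int:
        """A deliberately weak 16-bit tag derived from SHA-256 (illustration only)."""
        digest = hashlib.sha256(message).digest()
        return int.from_bytes(digest, "big") >> (256 - bits)

    target = b"settings: basis=Z, block=42"
    t = toy_tag(target)

    # Brute-force a colliding forged message (expected ~2**16 attempts)
    i = 0
    while True:
        forged = b"settings: basis=X, block=42 #" + str(i).encode()
        if forged != target and toy_tag(forged) == t:
            break
        i += 1
    print(f"collision after {i + 1} attempts: {forged!r}")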
In this paper, we present a consistent model to analyze the drain current mismatch of organic thin-film transistors. The model takes charge fluctuations and edge effects into account, to predict the fluctuations of drain currents. A Poisson distribution for the number of charge carriers is assumed to represent the random distribution of charge carriers in the channel. The edge effects due to geometric variations in fabrication processes are interpreted in terms of the fluctuations of channel length and width. The simulation results are corroborated by experimental results taken from over 80 organic transistors on a flexible plastic substrate.
@article{diva2:925847,
author = {Tu, Deyu and Takimiya, Kazuo and Zschieschang, Ute and Klauk, Hagen and Forchheimer, Robert},
title = {{Modeling of Drain Current Mismatch in Organic Thin-Film Transistors}},
journal = {IEEE/OSA Journal of Display Technology},
year = {2015},
volume = {11},
pages = {559--563},
}
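A hedged numerical sketch of the Poisson-fluctuation idea in this abstract: if the number of carriers in the channel is Poisson distributed with mean N and the drain current is proportional to that number, the relative current fluctuation scales as 1/sqrt(N). The values below are illustrative and not fitted to the measured transistors.

    # Hedged sketch of the Poisson charge-number fluctuation idea
    # (illustrative numbers only, not the fitted model or data above).
    import numpy as np

    rng = np.random.default_rng(0)

    for mean_carriers in (1e3, 1e5, 1e7):
        n = rng.poisson(mean_carriers, size=100_000)
        # If the drain current is proportional to the carrier number, the
        # relative current mismatch tracks the carrier-number fluctuation.
        rel_sigma = n.std() / n.mean()
        print(f"mean N = {mean_carriers:.0e}: relative fluctuation = {rel_sigma:.2e} "
              f"(1/sqrt(N) = {mean_carriers**-0.5:.2e})")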
Local realism is the worldview in which physical properties of objects exist independently of measurement and where physical influences cannot travel faster than the speed of light. Bell's theorem states that this worldview is incompatible with the predictions of quantum mechanics, as is expressed in Bell's inequalities. Previous experiments convincingly supported the quantum predictions. Yet, every experiment requires assumptions that provide loopholes for a local realist explanation. Here, we report a Bell test that closes the most significant of these loopholes simultaneously. Using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors, we observe a violation of a Bell inequality with high statistical significance. The purely statistical probability of our results occurring under local realism does not exceed 3.74 × 10⁻³¹, corresponding to an 11.5 standard deviation effect.
@article{diva2:892984,
author = {Giustina, Marissa and Versteegh, Marijn A. M. and Wengerowsky, Soeren and Handsteiner, Johannes and Hochrainer, Armin and Phelan, Kevin and Steinlechner, Fabian and Kofler, Johannes and Larsson, Jan-Åke and Abellan, Carlos and Amaya, Waldimar and Pruneri, Valerio and Mitchell, Morgan W. and Beyer, Joern and Gerrits, Thomas and Lita, Adriana E. and Shalm, Lynden K. and Woo Nam, Sae and Scheidl, Thomas and Ursin, Rupert and Wittmann, Bernhard and Zeilinger, Anton},
title = {{Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons}},
journal = {Physical Review Letters},
year = {2015},
volume = {115},
number = {25},
pages = {250401--},
}
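As a hedged consistency check of the two figures quoted in this abstract (assuming the usual one-sided Gaussian convention), the quoted p-value and the quoted significance agree to within rounding:

    # Hedged consistency check: relate the quoted p-value to a one-sided
    # Gaussian significance (assumes the usual one-sided normal convention).
    from scipy.stats import norm

    p = 3.74e-31
    print(f"{norm.isf(p):.2f} sigma")   # roughly 11.5, matching the quoted significance
    print(f"{norm.sf(11.5):.2e}")       # tail probability at 11.5 sigma, same order of magnitude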
Elastic optical networking (EON) with space-division multiplexing (SDM) is the only evident long-term solution to the capacity needs of the future networks. The introduction of space via spatial fibers, such as multicore fibers (MCF) to EON provides an additional dimension as well as challenges to the network planning and resource optimization problem. There are various types of technologies for SDM transmission medium, switching, and amplification; each of them induces different capabilities and constraints on the network. For example, employing MCF as the transmission medium for SDM mitigates the spectrum continuity constraint of the routing and spectrum allocation problem for EON. In fact, cores can be switched freely on different links during routing of the network traffic. On the other hand, intercore crosstalk should be taken into account while solving the resource allocation problem. In the framework of switching for elastic SDM network, the programmable architecture on demand (AoD) node (optical white box) can provide a more scalable solution with respect to the hard-wired reconfigurable optical add/drop multiplexers (ROADMs) (optical black box). This study looks into the routing, modulation, spectrum, and core allocation (RMSCA) problem for weakly-coupled MCF-based elastic SDM networks implemented through AoDs and static ROADMs. The proposed RMSCA strategies integrate the spectrum resource allocation, switching resource deployment, and physical layer impairment in terms of intercore crosstalk through a multiobjective cost function. The presented strategies perform a cross-layer optimization between the network and physical layers to compute the actual intercore crosstalk for the candidate resource solutions and are specifically tailored to fit the type of optical node deployed in the network. The aim of all these strategies is to jointly optimize the switching and spectrum resource efficiency when provisioning demands with diverse capacity requirements. Extensive simulation results demonstrate that 1) by exploiting the dense intranodal connectivity of the ROADM-based SDM network, resource efficiency and provisioned traffic volume improve significantly relative to the AoD-based solution, 2) the intercore crosstalk aware strategies improve substantially the provisioned traffic volume for the AoD-based SDM network, and 3) the number of switching modules grows very gently for the network designed with AoD nodes relative to the one with ROADMs as the traffic increases, qualifying AoD as a scalable and cost-efficient choice for future SDM networks.
@article{diva2:886288,
author = {Muhammad, Muhammad and Zervas, Georgios and Forchheimer, Robert},
title = {{Resource Allocation for Space-Division Multiplexing: Optical White Box Versus Optical Black Box Networking}},
journal = {Journal of Lightwave Technology},
year = {2015},
volume = {33},
number = {23},
pages = {4928--4941},
}
The notion of (non)contextuality pertains to sets of properties measured one subset (context) at a time. We extend this notion to include so-called inconsistently connected systems, in which the measurements of a given property in different contexts may have different distributions, due to contextual biases in experimental design or physical interactions (signaling): a system of measurements has a maximally noncontextual description if a joint distribution can be imposed on them in which the measurements of any one property in different contexts are equal to each other with the maximal probability allowed by their different distributions. We derive necessary and sufficient conditions for the existence of such a description in a broad class of systems including Klyachko-Can-Binicioglu-Shumovsky-type (KCBS), EPR-Bell-type, and Leggett-Garg-type systems. Because these conditions allow for inconsistent connectedness, they are applicable to real experiments. We illustrate this by analyzing an experiment by Lapkiewicz and colleagues aimed at testing contextuality in a KCBS-type system.
@article{diva2:865058,
author = {Kujala, Janne V. and Dzhafarov, Ehtibar N. and Larsson, Jan-Åke},
title = {{Necessary and Sufficient Conditions for an Extended Noncontextuality in a Broad Class of Quantum Mechanical Systems}},
journal = {Physical Review Letters},
year = {2015},
volume = {115},
number = {15},
pages = {150401--},
}
Device-independent quantum communication will require a loophole-free violation of Bell inequalities. In typical scenarios where line of sight between the communicating parties is not available, it is convenient to use energy-time entangled photons due to intrinsic robustness while propagating over optical fibers. Here we show an energy-time Clauser-Horne-Shimony-Holt Bell inequality violation with two parties separated by 3.7 km over the deployed optical fiber network belonging to the University of Concepcion in Chile. Remarkably, this is the first Bell violation with spatially separated parties that is free of the postselection loophole, which affected all previous in-field long-distance energy-time experiments. Our work takes a further step towards a fiber-based loophole-free Bell test, which is highly desired for secure quantum communication due to the widespread existing telecommunication infrastructure.
@article{diva2:845599,
author = {Carvacho, Gonzalo and Carine, Jaime and Saavedra, Gabriel and Cuevas, Alvaro and Fuenzalida, Jorge and Toledo, Felipe and Figueroa, Miguel and Cabello, Adan and Larsson, Jan-Åke and Mataloni, Paolo and Lima, Gustavo and Xavier, Guilherme B.},
title = {{Postselection-Loophole-Free Bell Test Over an Installed Optical Fiber Network}},
journal = {Physical Review Letters},
year = {2015},
volume = {115},
number = {3},
pages = {030503--},
}
We present a formal theory of contextuality for a set of random variables grouped into different subsets (contexts) corresponding to different, mutually incompatible conditions. Within each context the random variables are jointly distributed, but across different contexts they are stochastically unrelated. The theory of contextuality is based on the analysis of the extent to which some of these random variables can be viewed as preserving their identity across different contexts when one considers all possible joint distributions imposed on the entire set of the random variables. We illustrate the theory on three systems of traditional interest in quantum physics (and also in non-physical, e.g., behavioral studies). These are systems of the Klyachko-Can-Binicioglu-Shumovsky-type, Einstein-Podolsky-Rosen-Bell-type, and Suppes-Zanotti-Leggett-Garg-type. Listed in this order, each of them is formally a special case of the previous one. For each of them we derive necessary and sufficient conditions for contextuality while allowing for experimental errors and contextual biases or signaling. Based on the same principles that underly these derivations we also propose a measure for the degree of contextuality and compute it for the three systems in question.
@article{diva2:840025,
author = {Dzhafarov, Ehtibar N. and Kujala, Janne V. and Larsson, Jan-Åke},
title = {{Contextuality in Three Types of Quantum-Mechanical Systems}},
journal = {Foundations of physics},
year = {2015},
volume = {45},
number = {7},
pages = {762--782},
}
This paper gives an introduction to some of the problems of modern camera surveillance, and how these problems are, or can be, addressed using visualization techniques. The paper is written from an engineering point of view, attempting to communicate visualization techniques invented in recent years to the non-engineer reader. Most of these techniques have the purpose of making it easier for the surveillance operator to recognize or detect relevant events (such as violence), while, in contrast, some have the purpose of hiding information in order to be less privacy-intrusive. Furthermore, there are also cameras and sensors that produce data that have no natural visible form, and methods for visualizing such data are discussed as well. Finally, in a concluding discussion an attempt is made to predict how the discussed methods and techniques will be used in the future.
@article{diva2:814463,
author = {Ahlberg, Jörgen},
title = {{Visualization Techniques for Surveillance:
Visualizing What Cannot Be Seen and Hiding What Should Not Be Seen}},
journal = {Konsthistorisk Tidskrift},
year = {2015},
volume = {84},
number = {2},
pages = {123--138},
}
Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in analysis of complex heterogeneous materials with phase separation on the micro or nanometre scale. Here we show that much greater image contrast results from analysis of nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate almost threefold improvement in the ability to separate material components of a polymer blend when including this nonlinear response. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.
@article{diva2:800631,
author = {Forchheimer, Daniel and Forchheimer, Robert and Haviland, David B.},
title = {{Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy}},
journal = {Nature Communications},
year = {2015},
volume = {6},
number = {6270},
}
Poly(3-hexylthiophene) (P3HT) transistors with a thin ion-gel gate dielectric layer (100 nm thickness) were fabricated. The thin ion-gel dielectric layer retarded the capacitance drop at high frequencies and the diffusion of the ionic molecules in the polymer active layer that are severe drawbacks of the ion-gel dielectric transistors. Thereby, the thin ion-gel transistors showed hysteresis-free I-V characteristics, less frequency-dependence, and enhanced bias-stability. The average charge mobility was ~2 cm²/(V s) and the on/off ratio was 10⁴-10⁵. The dependence of the capacitance and the kinetics of ion translation on the thickness of the ion-gel were discussed by both experiments and theoretical calculations.
@article{diva2:794162,
author = {Won Lee, Sung and Shin, Minkwan and Yoon Park, Jae and Soo Kim, Bong and Tu, Deyu and Jeon, Sanghun and Jeong, Unyong},
title = {{Thin Ion-Gel Dielectric Layer to Enhance the Stability of Polymer Transistors}},
journal = {Science of Advanced Materials},
year = {2015},
volume = {7},
number = {5},
pages = {874--880},
}
Photonic systems based on energy-time entanglement have been proposed to test local realism using the Bell inequality. A violation of this inequality normally also certifies security of device-independent quantum key distribution (QKD) so that an attacker cannot eavesdrop or control the system. We show how this security test can be circumvented in energy-time entangled systems when using standard avalanche photodetectors, allowing an attacker to compromise the system without leaving a trace. We reach Bell values up to 3.63 at 97.6% faked detector efficiency using tailored pulses of classical light, which exceeds even the quantum prediction. This is the first demonstration of a violation-faking source that gives both tunable violation and high faked detector efficiency. The implications are severe: the standard Clauser-Horne-Shimony-Holt inequality cannot be used to show device-independent security for energy-time entanglement setups based on Franson’s configuration. However, device-independent security can be reestablished, and we conclude by listing a number of improved tests and experimental setups that would protect against all current and future attacks of this type.
@article{diva2:788362,
author = {Jogenfors, Jonathan and Elhassan, Ashraf M and Ahrens, Johan and Bourennane, Mohamed and Larsson, Jan-Åke},
title = {{Hacking the Bell test using classical light in energy-time entanglement--based quantum key distribution}},
journal = {Science Advances},
year = {2015},
volume = {1},
number = {11},
pages = {1--7},
}
n/a
@article{diva2:778300,
author = {Fabiano, Simone and Usta, Hakan and Forchheimer, Robert and Crispin, Xavier and Facchetti, Antonio and Berggren, Magnus},
title = {{Selective Remanent Ambipolar Charge Transport in Polymeric Field-Effect Transistors For High-Performance Logic Circuits Fabricated in Ambient}},
journal = {Advanced Materials},
year = {2014},
volume = {26},
number = {44},
pages = {7438--7443},
}
The Franson interferometer, proposed in 1989 (Franson 1989 Phys. Rev. Lett. 62 2205-08), beautifully shows the counter-intuitive nature of light. The quantum description predicts sinusoidal interference for specific outcomes of the experiment, and these predictions can be verified in experiment. In the spirit of Einstein, Podolsky, and Rosen it is possible to ask if the quantum-mechanical description (of this setup) can be considered complete. This question will be answered in detail in this paper, by delineating the quite complicated relation between energy-time entanglement experiments and Einstein-Podolsky-Rosen (EPR) elements of reality. The mentioned sinusoidal interference pattern is the same as that giving a violation in the usual Bell experiment. Even so, depending on the precise requirements made on the local realist model, this can imply (a) no violation, (b) smaller violation than usual, or (c) full violation of the appropriate statistical bound. Alternatives include (a) using only the measurement outcomes as EPR elements of reality, (b) using the emission time as EPR element of reality, (c) using path realism, or (d) using a modified setup. This paper discusses the nature of these alternatives and how to choose between them. The subtleties of this discussion need to be taken into account when designing and setting up experiments intended to test local realism. Furthermore, these considerations are also important for quantum communication, for example in Bell-inequality-based quantum cryptography, especially when aiming for device independence.
@article{diva2:769083,
author = {Jogenfors, Jonathan and Larsson, Jan-Åke},
title = {{Energy-time entanglement, elements of reality, and local realism}},
journal = {Journal of Physics A},
year = {2014},
volume = {47},
number = {42},
pages = {424032--},
}
Bell inequalities are intended to show that local realist theories cannot describe the world. A local realist theory is one where physical properties are defined prior to and independent of measurement, and no physical influence can propagate faster than the speed of light. Quantum-mechanical predictions for certain experiments violate the Bell inequality while a local realist theory cannot, and this shows that a local realist theory cannot give those quantum-mechanical predictions. However, because of unexpected circumstances or loopholes in available experimental tests, local realist theories can reproduce the data from these experiments. This paper reviews such loopholes, what effect they have on Bell inequality tests, and how to avoid them in experiment. Avoiding all these simultaneously in one experiment, usually called a loophole-free or definitive Bell test, remains an open task, but is very important for technological tasks such as device-independent security of quantum cryptography, and ultimately for our understanding of the world.
@article{diva2:769078,
author = {Larsson, Jan-Åke},
title = {{Loopholes in Bell inequality tests of local realism}},
journal = {Journal of Physics A},
year = {2014},
volume = {47},
number = {42},
pages = {424003--},
}
In a local realist model, physical properties are defined prior to and independent of measurement and no physical influence can propagate faster than the speed of light. Proper experimental violation of a Bell inequality would show that the world cannot be described with such a model. Experiments intended to demonstrate a violation usually require additional assumptions that make them vulnerable to a number of "loopholes." In both pulsed and continuously pumped photonic experiments, an experimenter needs to identify which detected photons belong to the same pair, giving rise to the coincidence-time loophole. Here, via two different methods, we derive Clauser-Horne- and Eberhard-type inequalities that are not only free of the fair-sampling assumption (thus not being vulnerable to the detection loophole), but also free of the fair-coincidence assumption (thus not being vulnerable to the coincidence-time loophole). Both approaches can be used for pulsed as well as for continuously pumped experiments. Moreover, as they can also be applied to already existing experimental data, we finally show that a recent experiment [Giustina et al., Nature (London) 497, 227 (2013)] violated local realism without requiring the fair-coincidence assumption.
@article{diva2:758549,
author = {Larsson, Jan-Åke and Giustina, Marissa and Kofler, Johannes and Wittmann, Bernhard and Ursin, Rupert and Ramelow, Sven},
title = {{Bell-inequality violation with entangled photons, free of the coincidence-time loophole}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2014},
volume = {90},
number = {3},
pages = {032107--},
}
Purpose – A model for streamers based on charge transport has been developed by MIT and ABB. The purpose of this paper is to investigate the consequences of changing numerical method from the finite element method (FEM) to the finite volume method (FVM) for simulations using the streamer model. The new solver is also used to extend the simulations to 3D. Design/methodology/approach – The equations from the MIT-ABB streamer model are implemented in OpenFOAM which uses the FVM. Checks of the results are performed including verification of convergence. The solver is then applied to some of the key simulations from the FEM model and results presented. Findings – The results for second mode streamers are confirmed, whereas the results for third mode streamers differ significantly leading to questioning of one hypothesis proposed based on the FEM results. The 3D simulations give consistent results and show a way forward for future simulations. Originality/value – The FVM has not been applied to the model before and led to more confidence in second mode result and revising of third mode results. In addition the new simulation method makes it possible to extend the results to 3D.
@article{diva2:755260,
author = {Lavesson, Nils and Jogenfors, Jonathan and Widlund, Ola},
title = {{Modeling of streamers in transformer oil using OpenFOAM}},
journal = {Compel},
year = {2014},
volume = {33},
number = {4},
pages = {1272--1281},
}
The interest in thermography as a method for spot weld inspection has increased during recent years since it is a full-field method suitable for automatic inspection. Thermography systems can be developed in different ways, with different physical setups, excitation sources, and image analysis algorithms. In this paper we suggest a single-sided setup of a thermography system using a flash lamp as excitation source. The analysis algorithm aims to find the spatial region in the acquired images corresponding to the successfully welded area, i.e., the nugget size. Experiments show that the system is able to detect spot welds, measure the nugget diameter, and based on the information also separate a spot weld from a stick weld. The system is capable of inspecting more than four spot welds per minute, and has potential for an automatic non-destructive system for spot weld inspection. The development opportunities are significant, since the algorithm used in the initial analysis is rather simplified. Moreover, further evaluation of alternative excitation sources can potentially improve the performance.
@article{diva2:746978,
author = {Runnemalm, Anna and Ahlberg, Jörgen and Appelgren, Anders and Sjökvist, Stefan},
title = {{Automatic Inspection of Spot Welds by Thermography}},
journal = {Journal of nondestructive evaluation},
year = {2014},
volume = {33},
number = {3},
pages = {398--406},
}
We show that the phenomenon of quantum contextuality can be used to certify lower bounds on the dimension accessed by the measurement devices. To prove this, we derive bounds for different dimensions and scenarios of the simplest noncontextuality inequalities. Some of the resulting dimension witnesses work independently of the prepared quantum state. Our constructions are robust against noise and imperfections, and we show that a recent experiment can be viewed as an implementation of a state-independent quantum dimension witness.
@article{diva2:737362,
author = {Guehne, Otfried and Budroni, Costantino and Cabello, Adan and Kleinmann, Matthias and Larsson, Jan-Åke},
title = {{Bounding the quantum dimension with contextuality}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2014},
volume = {89},
number = {6},
}
Elastic optical networks are envisaged as promising solutions to fulfill the diverse bandwidth requirements for the emerging heterogeneous network applications. To support flexible allocation of spectrum resources the optical network nodes need to be agile. Among the different proposed solutions for elastic nodes, the one based on architecture on demand (AoD) exhibits considerable flexibility against the other alternatives. The node modules in the case of AoD are not hard-wired, but can be connected/disconnected to any input/output port according to the requirements. Thus, each AoD node and the network (fabricated with AoD nodes) as a whole acts like an optical field-programmable gate array. This flexibility inherent in AoD can be exploited for different purposes, such as for cost-efficient and energy-efficient design of the networks. This study looks into the cost-efficient network planning issue for synthetic networks implemented through AoD nodes. The problem is formalized as an integer linear programming formulation for presenting the optimal solution. Furthermore, a scalable and effective heuristic algorithm is proposed for cost-efficient design, and its performance is compared with the optimal solution. The designed networks with AoD nodes are further investigated for a dynamic scenario, and their blocking probability due to limited switching resources in the nodes is examined. To alleviate the blocking performance for the dynamic case, an efficient synthesis strategy along with a scheme for optimal placement of switching resources within the network nodes is presented. Extensive results show that 1) even at high loads, the network with AoD nodes achieves savings in switching modules of up to 40% compared to the one with static reconfigurable optical add-drop multiplexers (ROADMs) through a proper network design, 2) by diminishing the spectrum selective switches the overall power consumption of the network decreases by more than 25% for high loads, and 3) for the dynamic scenario the blocking owing to the node modules constraint is alleviated significantly by slightly augmenting the switching devices and optimally deploying them within the network nodes.
@article{diva2:737184,
author = {Muhammad, Ajmal and Zervas, Georgios and Amaya, Norberto and Simeonidou, Dimitra and Forchheimer, Robert},
title = {{Introducing Flexible and Synthetic Optical Networking:
Planning and Operation Based on Network Function Programmable ROADMs}},
journal = {Journal of Optical Communications and Networking},
year = {2014},
volume = {6},
number = {7},
pages = {635--648},
}
It is known that if the dimension is a perfect square the Clifford group can be represented by monomial matrices. Another way of expressing this result is to say that when the dimension is a perfect square the standard representation of the Clifford group has a system of imprimitivity consisting of one dimensional subspaces. We generalize this result to the case of an arbitrary dimension. Let k be the square-free part of the dimension. Then we show that the standard representation of the Clifford group has a system of imprimitivity consisting of k-dimensional subspaces. To illustrate the use of this result we apply it to the calculation of SIC-POVMs (symmetric informationally complete positive operator valued measures), constructing exact solutions in dimensions 8 (hand-calculation) as well as 12 and 28 (machine-calculation).
@article{diva2:714191,
author = {Appleby, D.M. and Bengtsson, Ingemar and Brierley, Stephen and Ericsson, Åsa and Grassl, Markus and Larsson, Jan-Åke},
title = {{Systems of Imprimitivity for the Clifford Group}},
journal = {Quantum information \& computation},
year = {2014},
volume = {14},
number = {3-4},
pages = {339--360},
}
We show that for two-qubit chained Bell inequalities with an arbitrary number of measurement settings, nonlocality and entanglement are not only different properties but are inversely related. Specifically, we analytically prove that in the absence of noise, robustness of nonlocality, defined as the maximum fraction of detection events that can be lost such that the remaining ones still do not admit a local model, and concurrence are inversely related for any chained Bell inequality with an arbitrary number of settings. The closer quantum states are to product states, the harder it is to reproduce quantum correlations with local models. We also show that, in the presence of noise, nonlocality and entanglement are simultaneously maximized only when the noise level is equal to the maximum level tolerated by the inequality; in any other case, a more nonlocal state is always obtained by reducing the entanglement. In addition, we observed that robustness of nonlocality and concurrence are also inversely related for the Bell scenarios defined by the tight two-qubit three-setting I-3322 inequality, and the tight two-qutrit inequality I-3.
@article{diva2:710293,
author = {Vallone, Giuseppe and Lima, Gustavo and Gomez, Esteban S. and Canas, Gustavo and Larsson, Jan-Åke and Mataloni, Paolo and Cabello, Adan},
title = {{Bell scenarios in which nonlocality and entanglement are inversely related}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2014},
volume = {89},
number = {1},
pages = {012102--},
}
District heating is a common way of providing heat to buildings in urban areas. The heat is carried by hot water or steam and distributed in a network of pipes from a central powerplant. It is of great interest to minimize energy losses due to bad pipe insulation or leakages in such district heating networks. As the pipes generally are placed underground, it may be difficult to establish the presence and location of losses and leakages. Toward this end, this work presents methods for large-scale monitoring and detection of leakages by means of remote sensing using thermal cameras, so-called airborne thermography. The methods rely on the fact that underground losses in district heating systems lead to increased surface temperatures. The main contribution of this work is methods for automatic analysis of aerial thermal images to localize leaking district heating pipes. Results and experiences from large-scale leakage detection in several cities in Sweden and Norway are presented.
@article{diva2:705324,
author = {Friman, Ola and Follo, Peter and Ahlberg, Jörgen and Sjökvist, Stefan},
title = {{Methods for Large-Scale Monitoring of District Heating Systems Using Airborne Thermography}},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
year = {2014},
volume = {52},
number = {8},
pages = {5175--5182},
}
We describe a method for eye pupil localization based on an ensemble of randomized regression trees and use several publicly available datasets for its quantitative and qualitative evaluation. The method compares well with reported state-of-the-art and runs in real-time on hardware with limited processing power, such as mobile devices.
@article{diva2:692549,
author = {Markus, Nenad and Frljak, Miroslav and Pandzic, Igor S. and Ahlberg, Jörgen and Forchheimer, Robert},
title = {{Eye pupil localization with an ensemble of randomized trees}},
journal = {Pattern Recognition},
year = {2014},
volume = {47},
number = {2},
pages = {578--587},
}
Information-theoretically secure (ITS) authentication is needed in Quantum Key Distribution (QKD). In this paper, we study security of an ITS authentication scheme proposed by Wegman & Carter, in the case of partially known authentication key. This scheme uses a new authentication key in each authentication attempt, to select a hash function from an Almost Strongly Universal2 hash function family. The partial knowledge of the attacker is measured as the trace distance between the authentication key distribution and the uniform distribution; this is the usual measure in QKD. We provide direct proofs of security of the scheme, when using partially known key, first in the information-theoretic setting and then in terms of witness indistinguishability as used in the Universal Composability (UC) framework. We find that if the authentication procedure has a failure probability ε and the authentication key has an ε′ trace distance to the uniform, then under ITS, the adversary's success probability conditioned on an authentic message-tag pair is only bounded by ε + |T|ε′, where |T| is the size of the set of tags. Furthermore, the trace distance between the authentication key distribution and the uniform increases to |T|ε′ after having seen an authentic message-tag pair. Despite this, we are able to prove directly that the authenticated channel is indistinguishable from an (ideal) authentic channel (the desired functionality), except with probability less than ε + ε′. This proves that the scheme is (ε + ε′)-UC-secure, without using the composability theorem.
@article{diva2:616699,
author = {Abidin, Aysajan and Larsson, Jan-Åke},
title = {{Direct proof of security of Wegman-Carter authentication with partially known key}},
journal = {Quantum Information Processing},
year = {2014},
volume = {13},
number = {10},
pages = {2155--2170},
}
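A hedged numerical illustration of the ε + |T|ε′ bound quoted in this abstract, with toy parameter values that are not taken from the paper; it shows how the size of the tag set amplifies any imperfection of the key.

    # Hedged illustration of the epsilon + |T| * epsilon' bound from the
    # entry above, with toy parameter values (not from the paper).
    eps = 2.0**-32          # authentication failure probability (toy value)
    eps_prime = 2.0**-64    # trace distance of the key to uniform (toy value)
    tag_bits = 32
    num_tags = 2**tag_bits  # |T|, the size of the tag set

    bound = eps + num_tags * eps_prime
    print(f"adversary success bound: {bound:.3e}")
    # = 2^-32 + 2^32 * 2^-64 = 2^-31: the |T| factor doubles the naive estimate here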
The intestine of Caenorhabditis elegans is derived from 20 cells that are organized into nine intestinal rings. During embryogenesis, three of the rings rotate approximately 90 degrees in a process known as intestinal twist. The underlying mechanisms for this morphological event are not fully known, but it has been demonstrated that both left-right and anterior-posterior asymmetry is required for intestinal twist to occur. We have recently presented a rule-based meta-Boolean tree model intended to describe complex lineages. In this report we apply this model to the E lineage of C. elegans, specifically targeting the asymmetric anterior-posterior division patterns within the lineage. The resulting model indicates that cells with the same factor concentration are located next to each other in the intestine regardless of lineage origin. In addition, the shift in factor concentrations coincides with the boundary for intestinal twist. When modeling lit-1 mutant data according to the same principle, the factor distributions in each cell are altered, yet the concurrence between the shift in concentration and intestinal twist remains. This pattern suggests that intestinal twist is controlled by a threshold mechanism. In the current paper we present the factor concentrations for all possible combinations of symmetric and asymmetric divisions in the E lineage and relate these to the potential threshold by studying existing data for wild-type and mutant embryos. Finally, we discuss how the resulting models can serve as a basis for experimental design in order to reveal the underlying mechanisms of intestinal twist.
@article{diva2:1080583,
author = {Pettersson, Sofia and Forchheimer, Robert and Larsson, Jan-Åke},
title = {{Meta-Boolean models of asymmetric division patterns in the \emph{C. elegans} intestinal lineage:
Implications for the posterior boundary of intestinal twist}},
journal = {Worm},
year = {2013},
volume = {2},
}
Optical networks are expected to provide a unified platform for a diverse set of emerging applications (three-dimensional TV, digital cinema, e-health, grid computing, etc.). The service differentiation will be an essential feature of these networks. Considering the fact that users have different levels of patience for different network applications, referred to as set-up delay tolerance, it will be one of the key parameters for service differentiation. Service differentiation based on set-up delay tolerance will not only enable network users to select an appropriate service class (SC) in compliance with their requirements, but will also provide an opportunity to optimize the network resource provisioning by exploiting this information, resulting in an improvement in the overall performance. Improvement in network performance can be further enhanced by exploiting the connection holding-time awareness. However, when multiple classes of service with different set-up delay tolerances are competing for network resources, the connection requests belonging to SCs with higher set-up delay tolerance have better chances to grab the resources and leave less room for the others, resulting in degradation in the blocking performance of less patient customers. This study proposes different scheduling strategies for promoting the requests belonging to smaller set-up delay tolerance SCs, such as giving priority, reserving some fraction of available resources, and augmenting the search space by providing some extra paths. Extensive simulation results show that 1) priority in the rescheduling queue is not always sufficient for eradicating the degradation effect of high delay tolerant SCs on the provisioning rate of the most stringent SC, and 2) by utilizing the proposed strategies, resource efficiency and overall network blocking performance improve significantly in all SCs.
@article{diva2:688473,
author = {Muhammad, Ajmal and Cavdar, Cicek and Wosinska, Lena and Forchheimer, Robert},
title = {{Service Differentiated Provisioning in Dynamic WDM Networks Based on Set-Up Delay Tolerance}},
journal = {Journal of Optical Communications and Networking},
year = {2013},
volume = {5},
number = {11},
pages = {1250--1261},
}
This paper presents the use of polyelectrolyte-decorated amyloid fibrils as gate electrolyte in electrochromic electrochemical transistors. Conducting polymer alkoxysulfonate poly(3,4-ethylenedioxythiophene) (PEDOT-S) and luminescent conjugate polymer poly(thiophene acetic acid) (PTAA) are utilized to decorate insulin amyloid fibrils for gating lateral poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) electrochemical transistors. In this comparative work, four gate electrolytes are explored, including the polyelectrolytes and their amyloid-fibril complexes. The discrimination of transistor behaviors with different gate electrolytes is understood in terms of an electrochemical mechanism. The combination of luminescent polymers, biomolecules and electrochromic transistors enables multiple functions in a single device, for example, color modulation in a monochrome electrochromic display, as well as biological sensing/labeling.
@article{diva2:656887,
author = {Tu, Deyu and Nilsson, David and Forchheimer, Robert},
title = {{Electrochromic Electrochemical Transistors Gated With Polyelectrolyte-Decorated Amyloid Fibrils}},
journal = {IEEE/OSA Journal of Display Technology},
year = {2013},
volume = {9},
number = {9},
pages = {755--759},
}
In this Comment we argue that the experiment described in the recent Letter does not allow one to make conclusions about contextuality. Our main criticism is that the measurement of the observables as well as the preparation of the state manifestly depend on the chosen context. Contrary to that, contextuality is about the behavior of the same measurement device in different experimental contexts.
@article{diva2:612434,
author = {Amselem, E. and Bourennane, M. and Budroni, C. and Cabello, A. and Guehne, O. and Kleinmann, M. and Larsson, Jan-Åke and Wiesniak, M.},
title = {{Editorial Material: Comment on "State-Independent Experimental Test of Quantum Contextuality"}},
journal = {Physical Review Letters},
year = {2013},
volume = {110},
number = {7},
pages = {1--1},
}
We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation that is the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to implementation in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was described for the first time in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which will give the time-to-impact as well as possible transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be achieved at a rate of 10 kHz with today's technology.
@article{diva2:609764,
author = {Astrom, Anders and Forchheimer, Robert},
title = {{Low-complexity, high-speed, and high-dynamic range time-to-impact algorithm}},
journal = {Journal of Electronic Imaging (JEI)},
year = {2012},
volume = {21},
number = {4},
}
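As an illustration of the principle behind the abstract above, the quantity of interest follows from the standard time-to-contact relation tau = x/(dx/dt) for a feature at image coordinate x measured from the focus of expansion. The Python sketch below is a minimal illustration under that assumption; the positions, frame rate, and variable names are invented and this is not the NSIP implementation described in the paper.

import numpy as np

# Hypothetical data: image positions (pixels, measured from the focus of
# expansion) of one feature tracked along a 1-D line over consecutive frames.
frame_rate_hz = 100.0
positions_px = np.array([10.0, 10.5, 11.1, 11.7, 12.4])

# Standard time-to-contact relation: tau = x / (dx/dt).
dx_dt = (positions_px[-1] - positions_px[-2]) * frame_rate_hz   # pixels per second
tau_s = positions_px[-1] / dx_dt
print(f"estimated time to impact: {tau_s:.2f} s")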
Contextuality is a natural generalization of nonlocality which does not need composite systems or spacelike separation and offers a wider spectrum of interesting phenomena. Most notably, in quantum mechanics there exist scenarios where the contextual behavior is independent of the quantum state. We show that the quest for an optimal inequality separating quantum from classical noncontextual correlations in a state-independent manner admits an exact solution, as it can be formulated as a linear program. We introduce the noncontextuality polytope as a generalization of the locality polytope and apply our method to identify two different tight optimal inequalities for the most fundamental quantum scenario with state-independent contextuality.
@article{diva2:602873,
author = {Kleinmann, Matthias and Budroni, Costantino and Larsson, Jan-Åke and Guehne, Otfried and Cabello, Adan},
title = {{Optimal Inequalities for State-Independent Contextuality}},
journal = {Physical Review Letters},
year = {2012},
volume = {109},
number = {25},
pages = {250402--},
}
Quantum systems show contextuality. More precisely, it is impossible to reproduce the quantum-mechanical predictions using a non-contextual realist model, i.e., a model where the outcome of one measurement is independent of the choice of compatible measurements performed in the measurement context. There have been several attempts to quantify the amount of contextuality for specific quantum systems, for example, in the number of rays needed in a KS proof, or the number of terms in certain inequalities, or in the violation, noise sensitivity, and other measures. This paper is about another approach: to use a simple contextual model that reproduces the quantum-mechanical contextual behaviour, but not necessarily all quantum predictions. The amount of contextuality can then be quantified in terms of additional resources needed as compared with a similar model without contextuality. In this case the contextual model needs to keep track of the context used, so the appropriate measure would be memory. Another way to view this is as a memory requirement to be able to reproduce quantum contextuality in a realist model. The model we will use can be viewed as an extension of Spekkens' toy model [Phys. Rev. A 75, 032110 (2007)], and the relation is studied in some detail. To reproduce the quantum predictions for the Peres-Mermin square, the memory requirement is more than one bit in addition to the memory used for the individual outcomes in the corresponding noncontextual model.
@article{diva2:601002,
author = {Larsson, Jan-Åke},
title = {{A contextual extension of Spekkens' toy model}},
journal = {AIP Conference Proceedings},
year = {2012},
volume = {1424},
pages = {211--220},
}
We present approximations of the LLR distribution for a class of fixed-complexity soft-output MIMO detectors, such as the optimal soft detector and the soft-output via partial marginalization detector. More specifically, in a MIMO AWGN setting, we approximate the LLR distribution conditioned on the transmitted signal and the channel matrix with a Gaussian mixture model (GMM). Our main results consist of an analytical expression of the GMM model (including the number of modes and their corresponding parameters) and a proof that, in the limit of high SNR, this LLR distribution converges in probability towards a unique Gaussian distribution.
@article{diva2:587456,
author = {Cirkic, Mirsad and Persson, Daniel and Larsson, Jan-Åke and Larsson, Erik G.},
title = {{Approximating the LLR Distribution for a Class of Soft-Output MIMO Detectors}},
journal = {IEEE Transactions on Signal Processing},
year = {2012},
volume = {60},
number = {12},
pages = {6421--6434},
}
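As a hint of why conditional LLR distributions are Gaussian-like, consider the simplest possible case, a single-antenna BPSK link over AWGN. This toy sketch is not the MIMO detector analysed in the paper; the symbol, noise level, and sample count are made up.

import numpy as np

# For y = x + n with x in {+1, -1} and n ~ N(0, sigma^2), the exact LLR is
# L(y) = 2*y/sigma^2, so conditioned on the transmitted x it is Gaussian with
# mean 2*x/sigma^2 and variance 4/sigma^2.
rng = np.random.default_rng(0)
sigma, x = 0.8, 1.0
y = x + sigma * rng.standard_normal(100_000)
llr = 2.0 * y / sigma**2

print("empirical mean/var:", llr.mean(), llr.var())
print("predicted mean/var:", 2 * x / sigma**2, 4 / sigma**2)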
When water droplets impact each other while traveling on a superhydrophobic surface, we demonstrate that they are able to rebound like billiard balls. We present elementary Boolean logic operations and a flip-flop memory based on these rebounding water droplet collisions. Furthermore, bouncing or coalescence can be easily controlled by process parameters. Thus by the controlled coalescence of reactive droplets, here using the quenching of fluorescent metal nanoclusters as a model reaction, we also demonstrate an elementary operation for programmable chemistry.
@article{diva2:574982,
author = {Mertaniemi, Henrikki and Forchheimer, Robert and Ikkala, Olli and Ras, Robin H A},
title = {{Rebounding Droplet-Droplet Collisions on Superhydrophobic Surfaces: from the Phenomenon to Droplet Logic}},
journal = {Advanced Materials},
year = {2012},
volume = {24},
number = {42},
pages = {5738--5743},
}
Precise control over processing, transport and delivery of ionic and molecular signals is of great importance in numerous fields of life sciences. Integrated circuits based on ion transistors would be one approach to route and dispense complex chemical signal patterns to achieve such control. To date several types of ion transistors have been reported; however, only individual devices have so far been presented and most of them are not functional at physiological salt concentrations. Here we report integrated chemical logic gates based on ion bipolar junction transistors. Inverters and NAND gates of both npn type and complementary type are demonstrated. We find that complementary ion gates have higher gain and lower power consumption, as compared with the single transistor-type gates, which imitates the advantages of complementary logics found in conventional electronics. Ion inverters and NAND gates lay the groundwork for further development of solid-state chemical delivery circuits.
@article{diva2:536033,
author = {Tybrandt, Klas and Forchheimer, Robert and Berggren, Magnus},
title = {{Logic gates based on ion transistors}},
journal = {Nature Communications},
year = {2012},
volume = {3},
number = {871},
}
We show that the Clifford group-the normaliser of the Weyl-Heisenberg group-can be represented by monomial phase-permutation matrices if and only if the dimension is a square number. This simplifies expressions for SIC vectors, and has other applications to SICs and to Mutually Unbiased Bases. Exact solutions for SICs in dimension 16 are presented for the first time.
@article{diva2:534159,
author = {Appleby, D. M. and Bengtsson, Ingemar and Brierley, Stephen and Grassl, Markus and Gross, David and Larsson, Jan-Åke},
title = {{The monomial representations of the Clifford group}},
journal = {Quantum information \& computation},
year = {2012},
volume = {12},
number = {5-6},
pages = {404--431},
}
We propose an RLC model for PEDOT:PSS electrochemical transistors to interpret the persistent oscillating currents observed in experiments. The electrochemical reaction is represented by an inductor in the equivalent circuit. The simulation results show that an electrochemical device can be operated as normal transistors or oscillators under different voltage bias. This model predicts that analog circuit functions can be realized with "inductor-like" electrochemical devices.
@article{diva2:515135,
author = {Tu, Deyu and Forchheimer, Robert},
title = {{Self-oscillation in electrochemical transistors: An RLC modeling approach}},
journal = {Solid-State Electronics},
year = {2012},
volume = {69},
pages = {7--10},
}
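For intuition about why an inductor-like element in the equivalent circuit can produce oscillations, a generic series-RLC natural response can be simulated directly. The sketch below uses arbitrary component values and a simple forward-Euler integration; it is not the device parameters or circuit topology from the paper.

import numpy as np

# Series RLC: L*q'' + R*q' + q/C = 0, oscillatory (underdamped) when R < 2*sqrt(L/C).
R, L, C = 10.0, 1e-3, 1e-6          # made-up component values
dt, steps = 1e-7, 20_000
q, i = 1e-6, 0.0                    # initial charge and current
trace = []
for _ in range(steps):
    dq, di = i, -(R * i + q / C) / L    # state derivatives
    q, i = q + dt * dq, i + dt * di     # forward-Euler step
    trace.append(q)

print("underdamped:", R < 2 * (L / C) ** 0.5)
print("sign changes in q(t):", int(np.sum(np.diff(np.sign(trace)) != 0)))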
Entanglement and its consequences—in particular the violation of Bell inequalities, which defies our concepts of realism and locality—have been proven to play key roles in Nature by many experiments for various quantum systems. Entanglement can also be found in systems not consisting of ordinary matter and light, i.e. in massive meson–antimeson systems. Bell inequalities have been discussed for these systems, but to date no direct experimental test to conclusively exclude local realism has been found. This mainly stems from the fact that one only has access to a restricted class of observables and that these systems are also decaying. In this Letter we put forward a Bell inequality for unstable systems which can be tested at accelerator facilities with current technology. Herewith, the long-awaited proof that such systems at different energy scales can reveal the sophisticated "dynamical" nonlocal feature of Nature in a direct experiment becomes feasible. Moreover, the role of entanglement and CP violation, an asymmetry between matter and antimatter, is explored, a special feature offered only by these meson–antimeson systems.
@article{diva2:512811,
author = {Hiesmayr, Beatrix C and Di Domenico, Antonio and Curceanu, Catalina and Gabriel, Andreas and Huber, Marcus and Larsson, Jan-Åke and Moskal, Pawel},
title = {{Revealing Bell's nonlocality for unstable systems in high energy physics}},
journal = {European Physical Journal C},
year = {2012},
volume = {72},
number = {1},
}
The Hardy test of nonlocality can be seen as a particular case of the Bell tests based on the Clauser-Horne (CH) inequality. Here we stress this connection when we analyze the relation between the CH-inequality violation, its threshold detection efficiency, and the measurement settings adopted in the test. It is well known that the threshold efficiencies decrease when one considers partially entangled states and that the use of these states, unfortunately, generates a reduction in the CH violation. Nevertheless, these quantities are both dependent on the measurement settings considered, and in this paper we show that there are measurement bases which allow for an optimal situation in this trade-off relation. These bases are given as a generalization of the Hardy measurement bases, and they will be relevant for future Bell tests relying on pairs of entangled qubits.
@article{diva2:496586,
author = {Lima, G and Inostroza, E B and Vianna, R O and Larsson, Jan-Åke and Saavedra, C},
title = {{Optimal measurement bases for Bell tests based on the Clauser-Horne inequality}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2012},
volume = {85},
number = {1},
pages = {012105--},
}
The ePUMA architecture is a novel parallel architecture being developed as a platform for low-power computing, typically for embedded or hand-held devices. It was originally designed for radio baseband processors for hand-held devices and for radio base stations. It has also been adapted for executing high definition video CODECs. In this paper, we investigate the possibilities and limitations of the platform for real-time graphics, with focus on hand-held gaming.
@article{diva2:437260,
author = {Ragnemalm, Ingemar and Liu, Dake},
title = {{Adapting the ePUMA Architecture for Hand-held Video Games}},
journal = {International Journal of Computer Information Systems and Industrial Management Applications},
year = {2012},
volume = {4},
pages = {153--160},
}
The simulation of quantum effects requires certain classical resources, and quantifying them is an important step to characterize the difference between quantum and classical physics. For a simulation of the phenomenon of state-independent quantum contextuality, we show that the minimum amount of memory used by the simulation is the critical resource. We derive optimal simulation strategies for important cases and prove that reproducing the results of sequential measurements on a two-qubit system requires more memory than the information-carrying capacity of the system.
@article{diva2:471873,
author = {Kleinmann, Matthias and Guehne, Otfried and Portillo, Jose R. and Larsson, Jan-Åke and Cabello, Adan},
title = {{Memory cost of quantum contextuality}},
journal = {New Journal of Physics},
year = {2011},
volume = {13},
number = {113011},
}
We present a dc model to simulate the static performance of electrolyte-gated organic field-effect transistors. The channel current is expressed as charge drift transport under an electric field. The charges accumulated in the channel are considered to be contributed by the voltage-dependent electric-double-layer capacitance. The voltage-dependent contact effect and the short-channel effect are also taken into account in this model. A straightforward and efficient methodology is presented to extract the model parameters. The versatility of this model is discussed as well. The model is verified by the good agreement between simulation and experimental data.
@article{diva2:450533,
author = {Tu, Deyu and Herlogsson, Lars and Kergoat, Loig and Crispin, Xavier and Berggren, Magnus},
title = {{A Static Model for Electrolyte-Gated Organic Field-Effect Transistors}},
journal = {IEEE Transactions on Electron Devices},
year = {2011},
volume = {58},
number = {10},
pages = {3574--3582},
}
In this paper we present a solution for the three dimensional representation of mobile computer games which includes both motion parallax and an autostereoscopic display. The system was built on hardware which is available on the consumer market: an iPhone 3G with a Wazabee 3Dee Shell, which is an autostereoscopic extension for the iPhone. The motion sensor of the phone was used for the implementation of the motion parallax effect as well as for a tilt compensation for the autostereoscopic display. This system was evaluated in a limited user study on mobile 3D displays. Despite some obstacles that needed to be overcome and a few remaining shortcomings of the final system, an overall acceptable 3D experience could be reached. That leads to the conclusion that portable systems for the consumer market which include 3D displays are within reach.
@article{diva2:438151,
author = {Ogniewski, Jens and Ragnemalm, Ingemar},
title = {{Autostereoscopy and Motion Parallax for Mobile Computer Games Using Commercially Available Hardware}},
journal = {International Journal of Computer Information Systems and Industrial Management Applications},
year = {2011},
volume = {3},
pages = {480--488},
}
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P. Let K denote the set of probability vectors on S. With every partition M of P we can associate a transition probability function P_M on K defined in such a way that if p ∈ K and M ∈ M are such that ‖pM‖ > 0, then, with probability ‖pM‖, the vector p is transferred to the vector pM/‖pM‖. Here ‖·‖ denotes the l¹-norm. In this paper we investigate the convergence in distribution for Markov chains generated by transition probability functions induced by partitions of transition probability matrices. The main motivation for this investigation is the application of the convergence results obtained to filtering processes of partially observed Markov chains with denumerable state space.
@article{diva2:405233,
author = {Kaijser, Thomas},
title = {{On Markov Chains Induced by Partitioned Transition Probability Matrices}},
journal = {Acta Mathematica Sinica, English Series},
year = {2011},
volume = {27},
number = {3},
pages = {441--476},
}
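The construction in the abstract above can be stated in a few lines of code. The sketch below uses an arbitrary two-state example; the matrices P, M1 and M2 are made up and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
P  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
M1 = np.array([[0.7, 0.0],
               [0.0, 0.6]])
M2 = P - M1                          # {M1, M2} is a partition of P

def step(p, partition):
    # Choose M with probability ||pM||_1, then move to pM / ||pM||_1.
    weights = np.array([np.sum(p @ M) for M in partition])
    k = rng.choice(len(partition), p=weights)
    q = p @ partition[k]
    return q / q.sum()

p = np.array([0.5, 0.5])
for _ in range(5):
    p = step(p, [M1, M2])
print(p)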
Klyachko and coworkers consider an orthogonality graph in the form of a pentagram, and in this way derive a Kochen-Specker inequality for spin 1 systems. In some low-dimensional situations Hilbert spaces are naturally organised, by a magical choice of basis, into SO(N) orbits. Combining these ideas some very elegant results emerge. We give a careful discussion of the pentagram operator, and then show how the pentagram underlies a number of other quantum "paradoxes", such as that of Hardy.
@article{diva2:403121,
author = {Badziag, Piotr and Bengtsson, Ingemar and Cabello, Adan and Granstrom, Helena and Larsson, Jan-Åke},
title = {{Pentagrams and Paradoxes}},
journal = {Foundations of Physics},
year = {2011},
volume = {41},
number = {3},
pages = {414--423},
}
A cell lineage is the ancestral relationship between a group of cells that originate from a single founder cell. For example, in the embryo of the nematode Caenorhabditis elegans an invariant cell lineage has been traced, and with this information at hand it is possible to theoretically model the emergence of different cell types in the lineage, starting from the single fertilized egg. In this report we outline a modelling technique for cell lineage trees, which can be used for the C. elegans embryonic cell lineage but also extended to other lineages. The model takes into account both cell-intrinsic (transcription factor-based) and -extrinsic (extracellular) factors as well as synergies within and between these two types of factors. The model can faithfully recapitulate the entire C. elegans cell lineage, but is also general, i.e., it can be applied to describe any cell lineage. We show that synergy between factors, as well as the use of extrinsic factors, drastically reduce the number of regulatory factors needed for recapitulating the lineage. The model gives indications regarding co-variation of factors, number of involved genes and where in the cell lineage tree that asymmetry might be controlled by external influence. Furthermore, the model is able to emulate other (Boolean, discrete and differential-equation-based) models. As an example, we show that the model can be translated to the language of a previous linear sigmoid-limited concentration-based model (Geard and Wiles, 2005). This means that this latter model also can exhibit synergy effects, and also that the cumbersome iterative technique for parameter estimation previously used is no longer needed. In conclusion, the proposed model is general and simple to use, can be mapped onto other models to extend and simplify their use, and can also be used to indicate where synergy and external influence would reduce the complexity of the regulatory process.
@article{diva2:379166,
author = {Larsson, Jan-Åke and Wadströmer, Niclas and Hermanson, Ola and Lendahl, Urban and Forchheimer, Robert},
title = {{Modelling cell lineage using a meta-Boolean tree model with a relation to gene regulatory networks}},
journal = {Journal of Theoretical Biology},
year = {2011},
volume = {268},
number = {1},
pages = {62--76},
}
The Kochen-Specker theorem states that noncontextual hidden variable models are inconsistent with the quantum predictions for every yes-no question on a qutrit, corresponding to every projector in three dimensions. It has been suggested [D.A. Meyer, Phys. Rev. Lett. 83 (1999) 3751] that the inconsistency would disappear when restricting to projectors on unit vectors with rational components; that noncontextual hidden variables could reproduce the quantum predictions for rational vectors. Here we show that a qutrit state with rational components violates an inequality valid for noncontextual hidden-variable models [A.A. Klyachko et al., Phys. Rev. Lett. 101 (2008) 020403] using rational projectors. This shows that the inconsistency remains even when using only rational vectors.
@article{diva2:384998,
author = {Cabello, Adan and Larsson, Jan-Åke},
title = {{Quantum contextuality for rational vectors}},
journal = {Physics Letters A},
year = {2010},
volume = {375},
number = {2},
pages = {99--99},
}
@article{diva2:369813,
author = {Löfvenberg, Jacob and Larsson, Jan-Åke},
title = {{Comments on "New Results on Frame-Proof Codes and Traceability Schemes"}},
journal = {IEEE Transactions on Information Theory},
year = {2010},
volume = {56},
number = {11},
pages = {5888--5889},
}
Recent years have seen advances in the estimation of full 6 degree-of-freedom object pose from a single 2D image. These advances have often been presented as a result of, or together with, a new local image feature type. This paper examines how the pose accuracy and recognition robustness for such a system varies with the choice of feature type. This is done by evaluating a full 6 degree-of-freedom pose estimation system for 17 different combinations of local descriptors and detectors. The evaluation is done on data sets with photos of challenging 3D objects with simple and complex backgrounds and varying illumination conditions. We examine the performance of the system under varying levels of object occlusion and find that many features tolerate considerable object occlusion. From the experiments we can conclude that duplet features, which use pairs of interest points, improve pose estimation accuracy compared to single-point features. Interestingly, we can also show that many features previously used for recognition and wide-baseline stereo are unsuitable for pose estimation; one notable example is the affine covariant features that have proven quite successful in other applications. The data sets and their ground truths are available on the web to allow future comparison with novel algorithms.
@article{diva2:325003,
author = {Viksten, Fredrik and Forss\'{e}n, Per-Erik and Johansson, Björn and Moe, Anders},
title = {{Local Image Descriptors for Full 6 Degree-of-Freedom Object Pose Estimation and Recognition}},
journal = {},
year = {2010},
}
A basic assumption behind the inequalities used for testing noncontextual hidden variable models is that the observables measured on the same individual system are perfectly compatible. However, compatibility is not perfect in actual experiments using sequential measurements. We discuss the resulting "compatibility loophole" and present several methods to rule out certain hidden variable models that obey a kind of extended noncontextuality. Finally, we present a detailed analysis of experimental imperfections in a recent trapped-ion experiment and apply our analysis to that case.
@article{diva2:304605,
author = {Guehne, Otfried and Kleinmann, Matthias and Cabello, Adan and Larsson, Jan-Åke and Kirchmair, Gerhard and Zaehringer, Florian and Gerritsma, Rene and Roos, Christian F},
title = {{Compatibility and noncontextuality for sequential measurements}},
journal = {Physical Review A},
year = {2010},
volume = {81},
number = {2},
pages = {022121--},
}
In this paper, we review and comment on "A novel protocol-authentication algorithm ruling out a man-in-the-middle attack in quantum cryptography" [M. Peev et al., Int. J. Quant. Inf. 3 (2005) 225]. In particular, we point out that the proposed primitive is not secure when used in a generic protocol, and needs additional authenticating properties of the surrounding quantum-cryptographic protocol.
@article{diva2:234516,
author = {Abidin, Aysajan and Larsson, Jan-Åke},
title = {{Vulnerability of "A Novel Protocol-Authentication Algorithm Ruling out a Man-in-the-Middle Attack in Quantum Cryptography"}},
journal = {International Journal of Quantum Information},
year = {2009},
volume = {7},
number = {5},
pages = {1047--1052},
}
The chained Bell inequalities of Braunstein and Caves involving N settings per observer have some interesting applications. Here we obtain the minimum detection efficiency required for a loophole-free violation of the Braunstein-Caves inequalities for any N ≥ 2. We discuss both the case in which both particles are detected with the same efficiency and the case in which the particles are detected with different efficiencies.
@article{diva2:233672,
author = {Cabello, Adan and Larsson, Jan-Åke and Rodriguez, David},
title = {{Minimum detection efficiency required for a loophole-free violation of the Braunstein-Caves chained Bell inequalities}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2009},
volume = {79},
number = {6},
pages = {062109-1--062109-7},
}
Unconditionally secure message authentication is an important part of Quantum Cryptography (QC). We analyze security effects of using a key obtained from QC for authentication purposes in later rounds of QC. In particular, the eavesdropper gains partial knowledge on the key in QC that may have an effect on the security of the authentication in the later round. Our initial analysis indicates that this partial knowledge has little effect on the authentication part of the system, in agreement with previous results on the issue. However, when taking the full QC protocol into account, the picture is different. By accessing the quantum channel used in QC, the attacker can change the message to be authenticated. This together with partial knowledge of the key does incur a security weakness of the authentication. The underlying reason for this is that the authentication used, which is insensitive to such message changes when the key is unknown, becomes sensitive when used with a partially known key. We suggest a simple solution to this problem, and stress usage of this or an equivalent extra security measure in QC.
@article{diva2:260369,
author = {Cederlöf, Jörgen and Larsson, Jan-Åke},
title = {{Security aspects of the Authentication used in Quantum Cryptography}},
journal = {IEEE Transactions on Information Theory},
year = {2008},
volume = {54},
number = {4},
pages = {1735--1741},
}
New strategies to improve neuron coupling to neuroelectronic implants are needed. In particular, to maintain functional coupling between implant and neurons, foreign body response like encapsulation must be minimized. Apart from modifying materials to mitigate encapsulation, it has been shown that with extremely thin structures, encapsulation will be less pronounced. We here utilize wire electrochemical transistors (WECTs) using conducting polymer coated fibers. Monofilaments down to 10 μm can be successfully coated and woven into complex networks with built-in logic functions, so-called textile logic. Such systems can control signal patterns at a large number of electrode terminals from a few addressing fibres. Not only is the fibre size in the range where less encapsulation is expected, but textiles are known to make successful implants because of their soft and flexible mechanical properties. Further, textile fabrication provides versatility and even three-dimensional networks are possible. Three possible architectures for neuroelectronic systems are discussed. WECTs are sensitive to dehydration, and materials for better durability or improved encapsulation are needed for stable performance in biological environments.
@article{diva2:236150,
author = {Asplund, Maria and Hamedi, Mahiar and Forchheimer, Robert and Inganäs, Olle and Holst, Hans von},
title = {{Construction of wire electrodes and 3D woven logic as a potential technology for neuroprosthetic implants}},
journal = {IEEE Transactions on Biomedical Engineering},
year = {2008},
}
We report on a search for mutually unbiased bases (MUBs) in six dimensions. We find only triplets of MUBs, and thus do not come close to the theoretical upper bound 7. However, we point out that the natural habitat for sets of MUBs is the set of all complex Hadamard matrices of the given order, and we introduce a natural notion of distance between bases in Hilbert space. This allows us to draw a detailed map of where in the landscape the MUB triplets are situated. We use available tools, such as the theory of the discrete Fourier transform, to organize our results. Finally, we present some evidence for the conjecture that there exists a four dimensional family of complex Hadamard matrices of order 6. If this conjecture is true the landscape in which one may search for MUBs is much larger than previously thought.
@article{diva2:260516,
author = {Bengtsson, Ingemar and Bruzda, Wojciech and Ericsson, Åsa and Larsson, Jan-Åke and Tadej, Wojciech and Zyczkowski, Karol},
title = {{Mutually unbiased bases and Hadamard matrices of order six}},
journal = {Journal of Mathematical Physics},
year = {2007},
volume = {48},
number = {5},
pages = {052106-1--052106-21},
}
De Raedt et al. [Eur. Phys. J. B 53, 139 (2006)] have claimed to provide a local realist model for correlations of the singlet state in the familiar Einstein-Podolsky-Rosen-Bohm (EPRB) experiment when time-coincidence is used to decide which detection events should count in the analysis, and furthermore that this suggests that it is possible to construct local realistic models that can reproduce the quantum mechanical expectation values. In this letter we show that these conclusions cannot be upheld since their model exploits the so-called coincidence-time loophole. When this is properly taken into account no startling conclusions can be drawn about local realist modelling of quantum mechanics.
@article{diva2:259657,
author = {Seevinck, Michael P. and Larsson, Jan-Åke},
title = {{Comment on "A local realist model for correlations of the singlet state" by K. De Raedt, K. Keimpema, H. De Raedt, K. Michielsen and S. Miyashita}},
journal = {The European Physical Journal B},
year = {2007},
volume = {58},
number = {1},
pages = {51--53},
}
In Bell experiments, one problem is to achieve high enough photodetection to ensure that there is no possibility of describing the results via a local hidden-variable model. Using the Clauser-Horne inequality and a two-photon nonmaximally entangled state, a photodetection efficiency higher than 0.67 is necessary. Here we discuss atom-photon Bell experiments. We show that, assuming perfect detection efficiency of the atom, it is possible to perform a loophole-free atom-photon Bell experiment whenever the photodetection efficiency exceeds 0.50.
@article{diva2:259433,
author = {Cabello, Adan and Larsson, Jan-Åke},
title = {{Minimum Detection Efficiency for a Loophole-Free Atom-Photon Bell Experiment}},
journal = {Physical Review Letters},
year = {2007},
volume = {98},
pages = {220402-1--220402-4},
}
The use of organic polymers for electronic functions is mainly motivated by the low-end applications, where low cost rather than advanced performance is a driving force. Materials and processing methods must allow for cheap production. Printing of electronics using inkjets or classical printing methods has considerable potential to deliver this. Another technology that has been around for millennia is weaving using fibres. Integration of electronic functions within fabrics, with production methods fully compatible with textiles, is therefore of current interest, to enhance performance and extend functions of textiles. Standard polymer field-effect transistors require well defined insulator thickness and high voltage, so they have limited suitability for electronic textiles. Here we report a novel approach through the construction of wire electrochemical transistor (WECT) devices, and show that textile monofilaments with 10–100 µm diameters can be coated with continuous thin films of the conducting polythiophene poly(3,4-ethylenedioxythiophene), and used to create micro-scale WECTs on single fibres. We also demonstrate inverters and multiplexers for digital logic. This opens an avenue for three-dimensional polymer micro-electronics, where large-scale circuits can be designed and integrated directly into the three-dimensional structure of woven fibres.
@article{diva2:211081,
author = {Hamedi, Mahiar and Forchheimer, Robert and Inganäs, Olle},
title = {{Towards woven logic from organic electronic fibres}},
journal = {Nature Materials},
year = {2007},
volume = {6},
pages = {357--362},
}
Three binary fingerprinting code classes with properties similar to codes with the identifiable parent property are proposed. In order to compare such codes a new combinatorial quality measure is introduced. In the case of two cooperating pirates the measure is derived for the proposed codes, upper and lower bounds are constructed and the results of computer searches for good codes in the sense of the quality measure are presented. Some properties of the quality measure are also derived.
@article{diva2:249536,
author = {Löfvenberg, Jacob},
title = {{Binary Fingerprinting Codes}},
journal = {Designs, Codes and Cryptography},
year = {2005},
volume = {36},
number = {1},
pages = {69--81},
}
In this paper, we address the 3D tracking of pose and animation of the human face in monocular image sequences using deformable 3D models. The main contributions of this paper are as follows. First, we show how the robustness and stability of the Active Appearance Algorithm can be improved through the inclusion of a simple motion compensation based on feature correspondence. Second, we develop a new method able to adapt a deformable 3D model to a face in the input image. Central to this method is the decoupling of global head movements and local non-rigid deformations/animations. This decoupling is achieved by, first, estimating the global (rigid) motion using robust statistics and a statistical model for face texture, and then, adapting the 3D model to possible local animations using the concept of the Active Appearance Algorithm. This proposed method constitutes a significant step towards reliable model-based face trackers since the strengths of complementary tracking methodologies are combined.
Experiments evaluating the effectiveness of the methods are reported. Adaptation and tracking examples demonstrate the feasibility and robustness of the developed methods.
@article{diva2:690420,
author = {Dornaika, Fadi and Ahlberg, Jörgen},
title = {{Face and facial feature tracking using deformable models}},
journal = {International Journal of Image and Graphics},
year = {2004},
volume = {4},
number = {3},
pages = {499--532},
}
The bits of the binary expansion of position measurement results were used to derive Bell inequalities for position measurements. The output state of the nondegenerate optical parametric amplifier (NOPA) was used to obtain violations of these inequalities. It was shown that the position operator itself, together with other suitable operators, also can be used to violate the Bell inequality, deriving a Bell inequality more suited to the original Einstein-Podolsky-Rosen (EPR) setting. It was concluded that the NOPA state cannot be described by a local realist model, despite having a strictly positive Wigner function.
@article{diva2:266564,
author = {Larsson, Jan-Åke},
title = {{Bell inequalities for position measurements}},
journal = {Physical Review A},
year = {2004},
volume = {70},
number = {2},
pages = {022102-1--022102-5},
}
The use of entanglement by a quantum-cryptographic protocol to transfer data is discussed. Individual eavesdropping attacks on each qubit are detected by the security test where groups of qubits provide the key, but there exists a coherent attack internal to these groups which goes unnoticed in the security tests. The result shows that testing equality of the measurements at the level of the individual qubits also detects the coherent attack. A modified test is proposed to ensure security against a coherent attack.
@article{diva2:243445,
author = {Larsson, Jan-Åke},
title = {{No information flow using statistical fluctuations and quantum cryptography}},
journal = {Physical Review A: covering atomic, molecular, and optical physics and quantum information},
year = {2004},
volume = {69},
number = {4},
pages = {042317-1--042317-8},
}
This paper analyzes effects of time dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence (and hence whether or not a pair contributes to the actual data) is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole".
@article{diva2:242345,
author = {Larsson, Jan-Åke and Gill, Richard D},
title = {{Bell's inequality and the coincidence-time loophole}},
journal = {Europhysics letters},
year = {2004},
volume = {67},
number = {5},
pages = {707--713},
}
@article{diva2:269370,
author = {Accardi, Luigi and Belavkin, V. P. and Kent, Johyn T. and Brody, Dorje C. and Bingham, N. H. and Frey, Jeremy G. and Helland, Inge S. and Larsson, Jan-Åke and Majumdar, N. K. and Minozzo, Marco and Thompson, J. W.},
title = {{Discussion on ``On quantum statistical inference'' by O. E. Barndorff-Nielsen, R. D. Gill and P.E. Jupp}},
journal = {Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
year = {2003},
volume = {65},
number = {4},
pages = {805--816},
}
We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness is discussed, and the system evaluated in terms of accuracy.
@article{diva2:267448,
author = {Ahlberg, Jörgen and Forchheimer, Robert},
title = {{Face tracking for model-based coding and face animation}},
journal = {International journal of imaging systems and technology (Print)},
year = {2003},
volume = {13},
number = {1},
pages = {8--22},
}
We present an automatization of Barnsley's manual algorithm for the solution of the inverse problem of iterated function systems (IFSs). The problem is to retrieve the number of mappings and the parameters of an IFS from a digital binary image approximating the attractor induced by the IFS. Barnsley et al. described a way to manually solve the inverse problem by identifying the fragments, of which the collage is composed, and then computing the parameters of the mappings. The automatic algorithm searches through a finite set of points in the parameter space determining a set of affine mappings. The algorithm uses the collage theorem and the Hausdorff metric. The inverse problem of IFSs is related to image coding of binary images. If the number of mappings and the parameters of an IFS, with not too many mappings, could be obtained from a binary image, then this would give an efficient representation of the image. It is shown that the inverse problem solved by the automatic algorithm has a solution and some experiments show that the automatic algorithm is able to retrieve an IFS, including the number of mappings, from a digital binary image approximating the attractor induced by the IFS.
@article{diva2:267338,
author = {Wadströmer, Niclas},
title = {{An automatization of Barnsley's algorithm for the inverse problem of iterated function systems}},
journal = {IEEE Transactions on Image Processing},
year = {2003},
volume = {12},
number = {11},
pages = {1388--1397},
}
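The forward problem that the inverse algorithm above reverses is easy to illustrate: iterating an IFS (the "chaos game") produces a binary image approximating its attractor. The Sierpinski-triangle IFS below is a standard textbook example, not a test case from the paper.

import numpy as np

# Three affine contractions x -> a*x + b whose attractor is the Sierpinski triangle.
maps = [(0.5, np.array([0.0, 0.0])),
        (0.5, np.array([0.5, 0.0])),
        (0.5, np.array([0.25, 0.5]))]

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
img = np.zeros((256, 256), dtype=bool)
for _ in range(100_000):
    a, b = maps[rng.integers(len(maps))]
    x = a * x + b                     # chaos-game iteration
    img[int(x[1] * 255), int(x[0] * 255)] = True

print("attractor pixels set:", int(img.sum()))

The inverse problem treated in the paper is to recover the number of maps and their parameters from a binary image such as img.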
Bell inequalities for number measurements are derived via the observation that the bits of the number indexing a number state are proper qubits. Violations of these inequalities are obtained from the output state of the nondegenerate optical parametric amplifier.
@article{diva2:243423,
author = {Larsson, Jan-Åke},
title = {{Qubits from number states and Bell inequalities for number measurements}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2003},
volume = {67},
pages = {022108-1--022108-8},
}
We present a system for finding and tracking a face and extracting global and local animation parameters from a video sequence. The system uses an initial colour processing step for finding a rough estimate of the position, size, and in-plane rotation of the face, followed by a refinement step driven by an active model. The latter step refines the previous estimate, and also extracts local animation parameters. The system is able to track the face and some facial features in near real-time, and can compress the result to a bitstream compliant with MPEG-4 face and body animation.
@article{diva2:267888,
author = {Ahlberg, Jörgen},
title = {{An active model for facial feature tracking}},
journal = {EURASIP Journal on Applied Signal Processing},
year = {2002},
volume = {2002},
number = {6},
pages = {566--571},
}
Traceability codes are identifiable parent property (IPP) codes with the additional requirement that Hamming distance can be used to trace a parent of a word. Traceability codes can be used for constructing digital fingerprints in order to deter users from illegally copying digital data. We construct a class of traceability codes and determine the exact parameters of some of the codes in this class.
@article{diva2:267857,
author = {Lindkvist, Tina and Löfvenberg, Jacob and Svanström, Mattias},
title = {{A class of traceability codes}},
journal = {IEEE Transactions on Information Theory},
year = {2002},
volume = {48},
number = {7},
pages = {2094--2096},
}
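A toy illustration of tracing a parent by Hamming distance, as described above. The six-bit code and the pirated word are invented for illustration and are not the construction from the paper.

# Codewords (fingerprints) assigned to users; made-up example.
codebook = {
    "user_a": "000000",
    "user_b": "111000",
    "user_c": "000111",
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# A word formed by two colluders mixing their fingerprints bit by bit.
pirated = "110000"

accused = min(codebook, key=lambda user: hamming(codebook[user], pirated))
print(accused)   # the closest codeword in Hamming distance identifies a parent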
By probabilistic means, the concept of contextuality is extended so that it can be used in non-ideal situations. An inequality is presented, which at least in principle enables a test to discard non-contextual hidden-variable models at low error rates, in the spirit of the Kochen-Specker theorem. Assuming that the errors are independent, an explicit error bound of 1.42% is derived, below which a Kochen-Specker contradiction occurs.
@article{diva2:267815,
author = {Larsson, Jan-Åke},
title = {{A Kochen-Specker inequality}},
journal = {Europhysics letters},
year = {2002},
volume = {58},
number = {6},
pages = {799--805},
}
Quantum Cryptography, or more accurately, Quantum Key Distribution (QKD) is based on using an unconditionally secure "quantum channel" to share a secret key among two users. A manufacturer of QKD devices could, intentionally or not, use a (semi-) classical channel instead of the quantum channel, which would remove the supposedly unconditional security. One example is the BB84 protocol, where the quantum channel can be implemented in polarization of single photons. Here, use of several photons instead of one to encode each bit of the key provides a similar but insecure system. For protocols based on violation of a Bell inequality (e.g., the Ekert protocol) the situation is somewhat different. While the possibility is mentioned by some authors, it is generally thought that an implementation of a (semi-) classical channel will differ significantly from that of a quantum channel. Here, a counterexample will be given using an identical physical setup as is used in photon-polarization Ekert QKD. Since the physical implementation is identical, a manufacturer may include this modification as a Trojan Horse in manufactured systems, to be activated at will by an eavesdropper. Thus, the old truth of cryptography still holds: you have to trust the manufacturer of your cryptographic device. Even when you do violate the Bell inequality.
@article{diva2:259439,
author = {Larsson, Jan-Åke},
title = {{A practical Trojan Horse for Bell-inequality-based quantum cryptography}},
journal = {Quantum information \& computation},
year = {2002},
volume = {2},
number = {6},
pages = {434--442},
}
A Reply to the Comment by Carlos Luiz Ryff.
@article{diva2:270248,
author = {Aerts, Sven and Kwiat, Paul and Larsson, Jan-Åke and Zukowski, Marek},
title = {{Comment on \emph{Two-photon Franson-type experiment and local realism} - Reply}},
journal = {Physical Review Letters},
year = {2001},
volume = {86},
number = {9},
pages = {1909--1909},
}
An analysis of detector-efficiency in many-site Clauser-Horne inequalities is presented for the case of perfect visibility. It is shown that there is a violation of the presented n-site Clauser-Horne inequalities if and only if the efficiency is greater than n/(2n−1). Thus, for a two-site two-setting experiment there are no quantum-mechanical predictions that violate local realism unless the efficiency is greater than 2/3. Second, there are n-site experiments for which the quantum-mechanical predictions violate local realism whenever the efficiency exceeds n/(2n−1), a bound that approaches 1/2 as n grows.
@article{diva2:259445,
author = {Larsson, Jan-Åke and Semitecolos, Jason},
title = {{Strict detector-efficiency bounds for n-site Clauser-Horne inequalities}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {2001},
volume = {63},
pages = {022117-1--022117-5},
}
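For concreteness, the critical efficiency quoted in the abstract above evaluates as
\[
  \eta_{\mathrm{crit}}(n) = \frac{n}{2n-1}, \qquad
  \eta_{\mathrm{crit}}(2) = \frac{2}{3}, \qquad
  \eta_{\mathrm{crit}}(3) = \frac{3}{5}, \qquad
  \lim_{n \to \infty} \eta_{\mathrm{crit}}(n) = \frac{1}{2},
\]
so the required detector efficiency decreases towards one half as the number of sites grows.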
It is well-known in the physics community that the Copenhagen interpretation of quantum mechanics is very different from the Bohm interpretation. Usually, a local realistic model is thought to be even further from these two, as in its purest form it cannot even yield the probabilities from quantum mechanics by the Bell theorem. Nevertheless, by utilizing the “efficiency loophole” such a model can mimic the quantum probabilities, and more importantly, in this paper it is shown that it is possible to interpret this latter kind of local realistic model such that it contains elements of reality as found in the Bohm interpretation, while retaining the complementarity present in the Copenhagen interpretation.
@article{diva2:259456,
author = {Larsson, Jan-Åke},
title = {{A possible unification of the Copenhagen and the Bohm interpretations using local realism}},
journal = {Foundations of physics letters},
year = {2000},
volume = {13},
number = {5},
pages = {477--486},
}
In model-based, or semantic, coding, parameters describing the nonrigid motion of objects, e.g., the mimics of a face, are of crucial interest. The facial animation parameters (FAPs) specified in MPEG-4 compose a very rich set of such parameters, allowing a wide range of facial motion. However, the FAPs are typically correlated and also constrained in their motion due to the physiology of the human face. We seek here to utilize this spatial correlation to achieve efficient compression. As it does not introduce any interframe delay, the method is suitable for interactive applications, e.g., videophone and interactive video, where low delay is a vital issue.
@article{diva2:1261243,
author = {Ahlberg, Jörgen and Li, Haibo},
title = {{Representing and Compressing MPEG-4 Facial Animation Parameters using Facial Action Basis Functions}},
journal = {IEEE Transactions on Circuits and Systems for Video Technology},
year = {1999},
volume = {9},
number = {3},
pages = {405--410},
}
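One standard way to exploit this kind of spatial correlation is to project the parameter vector onto a small linear basis. The sketch below uses PCA on synthetic data as a stand-in; the paper's facial action basis functions are domain-specific and not reproduced here, and all dimensions and names are made up.

import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((1000, 3))            # a few underlying "actions"
mixing = rng.standard_normal((3, 68))              # 68 correlated parameters per frame
faps = latent @ mixing + 0.01 * rng.standard_normal((1000, 68))

mean = faps.mean(axis=0)
_, _, vt = np.linalg.svd(faps - mean, full_matrices=False)
basis = vt[:3]                                      # keep three basis vectors

coeffs = (faps - mean) @ basis.T                    # 68 values -> 3 coefficients per frame
recon = coeffs @ basis + mean
print("relative reconstruction error:",
      np.linalg.norm(recon - faps) / np.linalg.norm(faps))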
A local-variable model yielding the statistics from the singlet state is presented for the case of inefficient detectors and/or lowered visibility. It has independent errors and the highest efficiency at perfect visibility is 77.80%, while the highest visibility at perfect detector-efficiency is 63.66%. The model cannot be refuted by measurements made to date.
@article{diva2:259459,
author = {Larsson, Jan-Åke},
title = {{Modeling the singlet state with local variables}},
journal = {Physics Letters A},
year = {1999},
volume = {256},
number = {4},
pages = {245--252},
}
The Greenberger-Horne-Zeilinger (GHZ) paradox is subject to the detector-efficiency “loophole” in a similar manner as the Bell inequality. In a paper by J.-Å. Larsson [Phys. Rev. A 57, R3145 (1998)], the issue is investigated for very general assumptions. Here, the assumptions of constant efficiency and independent errors will be imposed, and it will be shown that the necessary and sufficient efficiency bound is not lowered, but remains at 75%. An explicit local-variable model is constructed in this paper to show the necessity of this bound. In other words, it is not possible to use the independence of experimental nondetection errors to rule out local realism in the GHZ paradox below 75% efficiency.
@article{diva2:259460,
author = {Larsson, Jan-Åke},
title = {{Detector efficiency in the Greenberger-Horne-Zeilinger paradox:
independent errors}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {1999},
volume = {59},
number = {6},
pages = {4801--4804},
}
The two-photon interferometric experiment proposed by J. D. Franson [Phys. Rev. Lett. 62, 2205 (1989)] is often treated as a “Bell test of local realism.” However, it has been suggested that this is incorrect due to the 50% postselection performed even in the ideal gedanken version of the experiment. Here we present a simple local hidden variable model of the experiment that successfully explains the results obtained in usual realizations of the experiment, even with perfect detectors. Furthermore, we also show that there is no such model if the switching of the local phase settings is done at a rate determined by the internal geometry of the interferometers.
@article{diva2:259458,
author = {Aerts, Sven and Kwiat, Paul and Larsson, Jan-Åke and Zukowski, Marek},
title = {{Two-photon Franson-type experiments and local realism}},
journal = {Physical Review Letters},
year = {1999},
volume = {83},
number = {15},
pages = {2872--2876},
}
In this paper, a method of generalizing the Bell inequality is presented that makes it possible to include detector inefficiency directly in the original Bell inequality. To enable this, the concept of “change of ensemble” will be presented, providing both qualitative and quantitative information on the nature of the “loophole” in the proof of the original Bell inequality. In a local hidden-variable model lacking change of ensemble, the generalized inequality reduces to an inequality that quantum mechanics violates as strongly as the original Bell inequality, irrespective of the level of efficiency of the detectors. A model that contains change of ensemble lowers the violation, and a bound for the level of change is obtained. The derivation of the bound in this paper is not dependent upon any symmetry assumptions such as constant efficiency, or even the assumption of independent errors.
@article{diva2:586507,
author = {Larsson, Jan-Åke},
title = {{Bell's inequality and detector inefficiency}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {1998},
volume = {57},
number = {5},
pages = {3304--3308},
}
In this paper detector efficiency conditions are derived for the Greenberger-Horne-Zeilinger (GHZ) paradox. The conditions will be necessary and sufficient, i.e., the GHZ paradox is explicable in terms of a local-variable model if the efficiency is below the bounds, and the GHZ prerequisites are inconsistent at higher efficiencies. The derivation does not make use of any of the symmetry assumptions usually made in the literature, most notably the assumption of independent errors. The errors in local-hidden-variable models are governed by the “hidden variable” and, therefore, one cannot in general assume that the errors are independent. It will be shown that this assumption is not necessary. Moreover, bounds are presented that do not need the emission rate of particle triples to be known. An example of such a bound is the ratio of the triple coincidence rate and the double coincidence rate at two detectors, which needs to be higher than 75% to yield a contradiction.
@article{diva2:259461,
author = {Larsson, Jan-Åke},
title = {{Necessary and sufficient detector-efficiency conditions for the Greenberger-Horne-Zeilinger paradox}},
journal = {Physical Review A. Atomic, Molecular, and Optical Physics},
year = {1998},
volume = {57},
number = {5},
pages = {R3145--R3149},
}
Books
This book is a broad introduction to GPU computing, covering several different technologies for programming GPUs for general purpose computing, including CUDA, OpenCL, Compute Shaders and more.
@book{diva2:1282083,
author = {Ragnemalm, Ingemar},
title = {{Polygons Feel No Pain, So How Can We Make Them Scream When They Attack in packs:
A course book in GPU computing}},
publisher = {Ragnemalm Utveckling \& Underhållning},
year = {2018},
address = {Linköping},
}
This book covers a wide range of subjects that didn’t fit in the first volume. You will find more advanced computer graphics topics, but we also widen the scope to physics, networking and AI, thereby making this a book on interesting topics useful in game programming and similar tasks.
The book uses OpenGL 3.2 and up, that is modern OpenGL.
The topics that are covered include
- Stencil buffer
- Frame Buffer Objects
- Texture compression
- Planar shadows, shadow maps and shadow volumes
- Ambient occlusion including screen-space ambient occlusion
- Bump mapping with extensions (Parallax mapping, Displacement mapping)
- Geometry shaders
- General-purpose GPU programming (GPGPU) with shaders and CUDA
- 3D display
- Rigid body dynamics
- Quaternions
- Integration methods
- Body animation, skinning
- Deformable models
- Networking
- Game AI
@book{diva2:1282086,
author = {Ragnemalm, Ingemar},
title = {{Polygons feel no pain : course book for TSBK03 Advanced game programming Volume 2:
So how can we make them scream?}},
publisher = {Ragnemalm Utveckling \& Underhållning},
year = {2017},
address = {Linköping},
}
This book is not about the most advanced topics of computer graphics. It does not stop at the mere basics, but a range of advanced topics (advanced shaders, shadows, game physics) are covered by volume 2.
The topics that are covered include
- 2D and 3D transformations
- 3D viewing
- Light models and shading
- Visible surface detection
- Surface detail (texture mapping etc)
- Collision detection and handling
- Large world management (high level VSD, LOD, scene graphs)
- Shader programming (GLSL)
- Ray-tracing and radiosity
- Low-level algorithms (Bresenham, flood fill, scan conversion)
- Anti-aliasing
OpenGL 3.2 is used for real-life examples, but this is not an OpenGL book, it is a computer graphics book. For learning OpenGL as such, there are other books.
@book{diva2:1282084,
author = {Ragnemalm, Ingemar},
title = {{Polygons feel no pain:
A course book in Computer Graphics with OpenGL}},
publisher = {Ragnemalm Utveckling \& Underhållning},
year = {2017},
address = {Linköping},
}
@book{diva2:437295,
author = {Ragnemalm, Ingemar},
title = {{Polygons feel no pain. Vol. 2, ...So how can we make them scream?
course book for TSBK03 advanced game programming}},
publisher = {Tryckakademin},
year = {2008},
address = {Linköping},
}
Book chapters
We propose a conjugate logic that can capture the behavior of quantum and quantum-like systems. The proposal is similar to the more generic concept of epistemic logic: it encodes knowledge or perhaps more correctly, predictions about outcomes of future observations on some systems. For a quantum system, these predictions are statements about future outcomes of measurements performed on specific degrees of freedom of the system. The proposed logic will include propositions and their relations, including connectives, but importantly also transformations between propositions on conjugate degrees of freedom of the systems. A key point is the addition of a transformation that allows to convert propositions about single systems into propositions about correlations between systems. We will see that subtle choices of the properties of the transformations lead to drastically different underlying mathematical models; one choice gives stabilizer quantum mechanics, while another choice gives Spekkens’ toy theory. This points to a crucial basic property of quantum and quantum-like systems that can be handled within the present conjugate logic by adjusting the mentioned choice. It also enables a discussion on what behaviors are properly quantum or only quantum-like, relating to that choice and how it manifests in the system under scrutiny.
@incollection{diva2:1798470,
author = {Johansson, Niklas and Huber, Felix and Larsson, Jan-Åke},
title = {{Conjugate Logic}},
booktitle = {The Quantum-Like Revolution},
year = {2023},
pages = {157--180},
publisher = {Springer},
address = {Cham},
}
Bell inequality tests of local realism are notoriously difficult to perform. Physicists have attempted these tests for more than 50 years and, with each attempt, have come closer and closer to a proper test. So far, every test performed has been affected by one or more loopholes.
@incollection{diva2:1643572,
author = {Larsson, Jan-Åke},
title = {{How to avoid the coincidence loophole}},
booktitle = {Quantum [Un]Speakables II},
year = {2017},
pages = {273--290},
publisher = {Springer},
address = {Cham},
}
"Risks in Technological Systems" is an interdisciplinary university textbook and a book for the educated reader on the risks of today’s society. In order to understand and analyze risks associated with the engineering systems on which modern society relies, other concerns have to be addressed, besides technical aspects. In contrast to many academic textbooks dealing with technological risks, this book has a unique interdisciplinary character that presents technological risks in their own context. Twenty-four scientists have come together to present their views on risks in technological systems. Their scientific disciplines cover not only engineering, economics and medicine, but also history, psychology, literature and philosophy. Taken together these contributions provide a broad, but accurate, interdisciplinary introduction to a field of increasing global interest, as well as rich opportunities to achieve in-depth knowledge of the subject.
@incollection{diva2:280287,
author = {Fåk, Viiveke},
title = {{IT - Risks and Security}},
booktitle = {Risks in Technological Systems},
year = {2010},
pages = {143--160},
publisher = {Springer-Verlag},
address = {London},
}
The chapter reports the use of organic electrochemical transistors in sensor applications. These transistors are excellent ion-to-electron transducers and can serve as very sensitive transducers in amperometric sensor applications. To further improve their sensitivity, we outline various amplification circuits all realized in organic electrochemical transistors.
@incollection{diva2:471819,
author = {Berggren, Magnus and Forchheimer, Robert and Bobacka, Johan and Svensson, Per-Olof and Nilsson, David and Larsson, Oscar and Ivaska, Ari},
title = {{PEDOT:
PSS-Based Electrochemical Transistors for Ion-to-Electron Transduction and Sensor Signal Amplification}},
booktitle = {Organic Semiconductors in Sensor Applications},
year = {2008},
pages = {263--280},
publisher = {Springer},
}
An experimentally observed violation of Bell's inequality is supposed to show the failure of local realism to deal with quantum reality. However, finite statistics and the time-sequential nature of real experiments still allow a loophole for local realism. We show that the randomised design of the Aspect experiment closes this loophole. Our main tool is van de Geer's (1995, 2000) martingale version of the classical Bernstein (1924) inequality guaranteeing, at the root-n scale, a not-heavier-than-Gaussian tail of the distribution of a sum of bounded supermartingale differences. The results are used to specify a protocol for a public bet between the author and L. Accardi, who in recent papers (Accardi and Regoli, 2000a, b, 2001; Accardi, Imafuku and Regoli, 2002) has claimed to have produced a suite of computer programmes, to be run on a network of computers, which will simulate a violation of Bell's inequalities. At a sample size of twenty-five thousand, both error probabilities are guaranteed smaller than about one in a million, provided we adhere to the sequential randomized design while Accardi aims for the greatest possible violation allowed by quantum mechanics.
@incollection{diva2:259441,
author = {Gill, Richard D. and Larsson, Jan-Åke},
title = {{Accardi contra Bell (cum mundi):
The Impossible Coupling}},
booktitle = {Mathematical Statistics and Applications},
year = {2003},
pages = {133--154},
publisher = {Institute of Mathematical Statistic},
address = {Hayward, CA},
}
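For orientation, a representative Bernstein/Freedman-type martingale tail bound of the kind referred to above is the following (the exact statement and constants used in the chapter may differ):

\[
\Pr\!\Big(\sum_{i=1}^{n} X_i \ge t \ \text{and}\ \sum_{i=1}^{n}\operatorname{Var}\!\big(X_i \mid \mathcal{F}_{i-1}\big) \le v\Big) \le \exp\!\left(-\frac{t^{2}}{2\,(v + ct/3)}\right),
\]

for supermartingale differences satisfying \(X_i \le c\) and \(\mathbb{E}[X_i \mid \mathcal{F}_{i-1}] \le 0\); at the root-n scale this gives the not-heavier-than-Gaussian tail mentioned in the abstract.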
A digital fingerprint is a unique pattern embedded in a digital document in order to identify a specific copy when it is used illegally. We have looked at two specific code structures for fingerprinting purposes: binary linear codes, often used as error-correcting codes, and what we call a binary sorted code.
@incollection{diva2:270020,
author = {Lindkvist, Tina},
title = {{Characteristics of some binary codes for fingerprinting}},
booktitle = {Information Security},
year = {2000},
pages = {97--107},
publisher = {Springer Berlin/Heidelberg},
}
Conference papers
Homomorphic encryption (HE) allows computations on encrypted data, leaking neither the input nor the computational output. While the method has historically been infeasible to use in practice, due to recent advancements, HE has started to be applied in real-world applications. Motivated by the possibility of outsourcing heavy computations to the cloud and still maintaining end-to-end security, in this paper, we use HE to design a basic audio conferencing application and demonstrate that our design approach (including some advanced features) is both practical and scalable. First, by homomorphically mixing encrypted audio in an untrusted, honest-but-curious server, we demonstrate the practical use of HE in audio communication. Second, by using multiplication operations, we go beyond the purely additive audio mixing and implement advanced example features capable of handling server-side mute and breakout rooms without the cloud server being able to extract sensitive user-specific metadata. Whereas the encryption and decryption times are shown to be magnitudes slower than generic AES encryption and roughly ten times slower than Signal's AES implementation, our solution approach is scalable and achieves end-to-end encryption while keeping performance well within the bounds of practical use. Third, besides studying the performance aspects, we also objectively evaluate the perceived audio quality, demonstrating that this approach also achieves excellent audio quality. Finally, our comprehensive evaluation and empirical findings provide valuable insights into the tradeoffs between HE schemes, their security configurations, and audio parameters. Combined, our results demonstrate that audio mixing using HE (including advanced features) now can be made both practical and scalable.
@inproceedings{diva2:1830171,
author = {Hasselquist, David and Johansson, Niklas and Carlsson, Niklas},
title = {{Now is the Time: Scalable and Cloud-supported Audio Conferencing using End-to-End Homomorphic Encryption}},
booktitle = {PROCEEDINGS OF THE 2023 CLOUD COMPUTING SECURITY WORKSHOP, CCSW 2023},
year = {2023},
pages = {41--53},
publisher = {ASSOC COMPUTING MACHINERY},
}
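As a rough illustration of the additive mixing idea described in the abstract above, the sketch below mixes two encrypted audio frames with textbook Paillier encryption. This is only a toy stand-in: the paper uses modern HE schemes (also supporting multiplications for the advanced features), and the parameters below are far too small to be secure.

# Toy additively homomorphic audio mixing (textbook Paillier, insecure toy parameters).
import math
import random

p, q = 1000000007, 998244353             # small well-known primes; real keys are much larger
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                      # modular inverse of lambda modulo n

def encrypt(m):
    r = random.randrange(2, n)            # should be coprime with n (overwhelmingly likely)
    return (1 + m * n) * pow(r, n, n2) % n2          # Paillier with generator g = n + 1

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def mix(c1, c2):
    # Homomorphic addition: the product of ciphertexts decrypts to the sum of plaintexts,
    # so the server can mix audio without ever seeing the individual streams.
    return c1 * c2 % n2

frame_a = [120, 98, 64]                   # toy non-negative PCM samples from speaker A
frame_b = [30, 50, 70]                    # toy samples from speaker B
mixed = [decrypt(mix(encrypt(a), encrypt(b))) for a, b in zip(frame_a, frame_b)]
print(mixed)                              # [150, 148, 134], computed on ciphertexts only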
We study a secure integrated sensing and communication (ISAC) model motivated by the need to simultaneously exploit the sensitive attributes of wireless devices, such as their location, and communicate securely. Specifically, we consider a state-dependent binary-input two-user additive white Gaussian noise (AWGN) broadcast channel, in which the channel state sequence consists of two components, each affecting a receiver, modeled as independent and identically distributed (i.i.d.) correlated phase shifts to approximate the location-dependent signatures of the receivers. The objective of the transmitter is to simultaneously estimate the channel states while reliably transmitting a secret message to one of the receivers, treating the other as a passive attacker. We characterize the exact secrecy-distortion region when 1) the channel output feedback is perfect, i.e., noiseless with a unit time delay; and 2) the channel is degraded. The characterized rate region offers an outer bound for more complex secure ISAC settings with noisy generalized output feedback and non-degraded channels. We also characterize the secrecy-distortion region for reversely-degraded channels. The results illustrate the benefits of jointly sensing the channel state and securely communicating messages as compared to separation-based methods.
@inproceedings{diva2:1792455,
author = {Günlü, Onur and Bloch, Matthieu and Schaefer, Rafael F. and Yener, Aylin},
title = {{Secure Integrated Sensing and Communication for Binary Input Additive White Gaussian Noise Channels}},
booktitle = {2023 IEEE 3RD INTERNATIONAL SYMPOSIUM ON JOINT COMMUNICATIONS \& SENSING, JC\&S},
year = {2023},
publisher = {IEEE},
}
Providing authenticated interactions is a key responsibility of most cryptographic protocols. When designing new protocols with strict security requirements it is therefore essential to formally verify that they fulfil appropriate authentication properties. We identify a gap in the case of protocols with unilateral (one-way) authentication, where existing properties are poorly adapted. In existing work, there is a preference for defining strong authentication properties, which is good in many cases but not universally applicable. In this work we make the case for weaker authentication properties. In particular, we investigate one-way authentication and extend Lowe's authentication hierarchy with two such properties. We formally prove the relationship between the added and existing properties. Moreover, we demonstrate the usefulness of the added properties in a case study on remote attestation protocols. This work complements earlier work with additional generic properties that support formal verification of a wider set of protocol types.
@inproceedings{diva2:1789863,
author = {Wilson, Johannes and Asplund, Mikael and Johansson, Niklas},
title = {{Extending the Authentication Hierarchy with One-Way Agreement}},
booktitle = {2023 IEEE 36th Computer Security Foundations Symposium (CSF)},
year = {2023},
series = {Proceedings - IEEE Computer Security Foundations Symposium (CSF)},
pages = {214--228},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
Small neural networks (NNs) used for error correction were shown to improve on classic channel codes and to address channel model changes. We extend the code dimension of any such structure by using the same NN under one-hot encoding multiple times, serially concatenated with an outer classic code. We design NNs with the same network parameters, where each Reed-Solomon codeword symbol is an input to a different NN. Significant improvements in block error probability for an additive Gaussian noise channel, as compared to the small neural code, are illustrated, as well as robustness to channel model changes.
@inproceedings{diva2:1772962,
author = {Günlü, Onur and Fritschek, Rick and Schaefer, Rafael F.},
title = {{Concatenated Classic and Neural (CCN) Codes: ConcatenatedAE}},
booktitle = {2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC},
year = {2023},
series = {IEEE Wireless Communications and Networking Conference},
publisher = {IEEE},
}
A quantum random number generator based on few-mode fiber technology is presented. The randomness originates from measurements of spatial modal quantum superpositions of the LP11a and LP11b modes. The generated sequences have passed NIST tests.
@inproceedings{diva2:1797403,
author = {Alarcon, Alvaro and Argillander, Joakim and Spegel-Lexne, Daniel and Xavier, Guilherme B.},
title = {{Quantum Random Number Generation Based on Spatial Modal Superposition over Few-Mode-Fibers}},
booktitle = {Frontiers in Optics + Laser Science 2022 (FIO, LS)},
year = {2022},
series = {Frontiers in Optics + Laser Science 2022 (FIO, LS)},
publisher = {Optica Publishing Group},
}
Federated learning (FL) enables large-scale machine learning with user data privacy due to its decentralized structure. However, the user data can still be inferred via the shared model updates. To strengthen the privacy, we consider FL with local differential privacy (LDP). One of the challenges in FL is its huge communication cost, caused by iterative transmissions of model updates. This cost has been relieved by quantization in the literature; however, few works consider its effect on LDP and the unboundedness of the randomized model updates. We propose a communication-efficient FL algorithm with LDP that uses a Gaussian mechanism followed by quantization and Elias-gamma coding. A novel design of the algorithm guarantees LDP even after the quantization. Under the proposed algorithm, we provide a theoretical trade-off analysis of privacy and communication costs: quantization reduces the communication costs but requires a larger perturbation to enable LDP. Experimental results show that the accuracy is mostly affected by the noise from the LDP mechanisms, and it becomes enhanced when the quantization error is larger. Nonetheless, our experiments enabled LDP with a significant compression ratio and only a slight reduction of accuracy in return. Furthermore, the proposed algorithm outperforms an algorithm with a discrete Gaussian mechanism under the same privacy budget and communication cost constraints in the experiments.
@inproceedings{diva2:1755699,
author = {Kim, Muah and Günlü, Onur and Schaefer, Rafael F.},
title = {{Effects of Quantization on Federated Learning with Local Differential Privacy}},
booktitle = {2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022)},
year = {2022},
series = {IEEE Global Communications Conference},
pages = {921--926},
publisher = {IEEE},
}
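The Elias-gamma step mentioned above can be sketched as follows; the Gaussian mechanism, the quantizer, and the mapping of (signed) quantized updates to positive integers from the paper are not reproduced here, so the input values are assumed to already be positive integers.

# Elias-gamma coding of positive integers (sketch of the compression step only).

def elias_gamma_encode(values):
    bits = []
    for n in values:
        assert n >= 1, "Elias gamma is defined for positive integers"
        b = bin(n)[2:]                        # binary representation, e.g. 9 -> '1001'
        bits.append("0" * (len(b) - 1) + b)   # len(b)-1 leading zeros, then the number
    return "".join(bits)

def elias_gamma_decode(bitstring):
    values, i = [], 0
    while i < len(bitstring):
        zeros = 0
        while bitstring[i] == "0":            # count the leading zeros
            zeros += 1
            i += 1
        values.append(int(bitstring[i:i + zeros + 1], 2))
        i += zeros + 1
    return values

updates = [1, 5, 9, 2, 33]                    # hypothetical positive-integer-mapped updates
code = elias_gamma_encode(updates)
assert elias_gamma_decode(code) == updates    # lossless round trip
print(len(code), "bits for", len(updates), "values")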
The problem of secure source coding with multiple terminals is extended by considering a remote source whose noisy measurements are the correlated random variables used for secure source reconstruction. The main additions to the problem include 1) all terminals noncausally observe a noisy measurement of the remote source; 2) a private key is available to all legitimate terminals; 3) the public communication link between the encoder and decoder is rate-limited; and 4) the secrecy leakage to the eavesdropper is measured with respect to the encoder input, whereas the privacy leakage is measured with respect to the remote source. Exact rate regions are characterized for a lossy source coding problem with a private key, remote source, and decoder side information under security, privacy, communication, and distortion constraints. By replacing the distortion constraint with a reliability constraint, we obtain the exact rate region also for the lossless case. Furthermore, the lossy rate region for scalar discrete-time Gaussian sources and measurement channels is established.
@inproceedings{diva2:1736520,
author = {Günlü, Onur and Schaefer, Rafael F. and Boche, Holger and Poor, H. Vincent},
title = {{Secure and Private Source Coding with Private Key and Decoder Side Information}},
booktitle = {2022 IEEE INFORMATION THEORY WORKSHOP (ITW)},
year = {2022},
series = {Information Theory Workshop},
pages = {226--231},
publisher = {IEEE},
}
In this work we demonstrate an all-fiber dynamically tunable beamsplitter based on a Sagnac interferometer capable of realizing measurement-device independent protocols for certifying the privacy of the generated sequence.
@inproceedings{diva2:1723675,
author = {Argillander, Joakim and Alarcon, Alvaro and Xavier, Guilherme B.},
title = {{All-fiber Dynamically Tunable Beamsplitter for Quantum Random Number Generators}},
booktitle = {Latin America Optics and Photonics Conference},
year = {2022},
publisher = {Optica Publishing Group},
}
We demonstrate an all-fiber platform for the generation and detection of spatial photonic states where combinations of LP01, LP11a and LP11b modes are used. This scheme can be employed for quantum communication applications.
@inproceedings{diva2:1653491,
author = {Alarcon, Alvaro and Argillander, Joakim and Xavier, Guilherme B.},
title = {{Creating Spatial States of Light for Quantum Information with Photonic Lanterns}},
booktitle = {Applied Industrial Optics 2021},
year = {2021},
series = {OSA Technical Digest},
publisher = {Optical Society of America},
}
We show that telecom few-mode fiber Mach-Zehnder interferometers can be used for quantum communication protocols where the LP01 and LP11a modes are employed to encode spatial qubits.
@inproceedings{diva2:1653489,
author = {Alarcon, Alvaro and Xavier, Guilherme B.},
title = {{A few-mode fiber Mach-Zehnder interferometer for quantum communication applications}},
booktitle = {Frontiers in Optics / Laser Science},
year = {2020},
series = {OSA Technical Digest},
publisher = {Optical Society of America},
}
Cellulose-based helices retrieved from the plant celery were coated with the conductive polymer poly(4-(2,3-dihydrothieno[3,4-b]-[1,4]dioxin-2-yl-methoxy)-1-butanesulfonate (PEDOT-S). A resonance close to 1 THz and a broad shoulder that extends to 3.5 THz were obtained, consistent with electromagnetic models. It was shown that both axial and normal helical-antenna modes are present, and that they correlate with the orientation and antenna electrical lengths of the coated helices. This work opens the possibility of designing tunable terahertz antennas through simple control of their dimensions and orientation.
@inproceedings{diva2:1463486,
author = {Ponseca, Carlito and Elfwing, Anders and Ouyang, Liangqi and Urbanowicz, Andrzej and Krotkus, Arunas and Tu, Deyu and Forchheimer, Robert and Inganäs, Olle},
title = {{Terahertz Helical Antenna Based on Celery Stalks}},
booktitle = {International Conference on Infrared, Millimeter, and Terahertz Waves, IRMMW-THz},
year = {2019},
series = {International Conference on Infrared, Millimeter, and Terahertz Waves, IRMMW-THz},
volume = {19},
publisher = {IEEE Computer Society},
}
The concept of mission-based driving cycles has been introduced as an efficient way of generating driving cycles with desired characteristics for data-driven development of vehicle powertrains. Mission-based driving cycles can be generated using traffic simulation tools with improved behavioral models that match simulation outputs to naturalistic driving data. Here, driving behavior categorization is studied, together with how it can be used to create a set of differently parameterized behavioral models corresponding to various types of drivers. The focus is on curvy road driving, and two different categorization features are used: speed through the curves and braking behavior.
@inproceedings{diva2:1426729,
author = {Kharrazi, Sogol and Frisk, Erik and Nielsen, Lars},
title = {{Driving Behavior Categorization and Models for Generation of Mission-based Driving Cycles}},
booktitle = {2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC)},
year = {2019},
series = {IEEE International Conference on Intelligent Transportation Systems-ITSC},
pages = {1349--1354},
publisher = {IEEE},
}
Spline interpolation is widely used in many different applications like computer graphics, animations and robotics. Many of these applications are run in real-time with constraints on computational complexity, thus fueling the need for computationally inexpensive, real-time, continuous and loop-free data interpolation techniques. Often Catmull-Rom splines are used, which use four control points: the two points between which to interpolate as well as the point directly before and the one directly after. If interpolating over time, this last point will lie in the future. However, in real-time applications future values may not be known in advance, meaning that Catmull-Rom splines are not applicable. In this paper we introduce another family of interpolation splines (dubbed Three-Point-Splines) which show the same characteristics as Catmull-Rom, but which use only three control points, omitting the one “in the future”. Therefore they can generate smooth interpolation curves even in applications which do not have knowledge of future points, without the need for more computationally complex methods. The generated curves are more rigid than Catmull-Rom, and because of that the Three-Point-Splines will not generate self-intersections within an interpolated curve segment, a property that has to be introduced to Catmull-Rom by careful parameterization. Thus, the Three-Point-Splines allow for greater freedom in parameterization, and can therefore be adapted to the application at hand, e.g. to a requested curvature or limitations on acceleration/deceleration. We also show a method that allows changing the control points during an ongoing interpolation, both with Three-Point-Splines as well as with Catmull-Rom splines.
@inproceedings{diva2:1371363,
author = {Ogniewski, Jens},
title = {{Cubic Spline Interpolation in Real-Time Applications using Three Control Points}},
booktitle = {Proceedings of International Conference in Central Europe on Computer Graphics, Visualization and ComputerVision'2019},
year = {2019},
series = {Computer Science Research Notes},
volume = {2901},
pages = {1--10},
publisher = {World Society for Computer Graphics},
}
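The Three-Point-Spline itself is defined in the paper and not reproduced here; for context, the sketch below shows the standard uniform Catmull-Rom segment it is compared against, which needs the "future" point p3.

# Uniform Catmull-Rom interpolation between p1 and p2 (baseline; needs the future point p3).

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the segment between p1 and p2 at t in [0, 1] (1D control points here)."""
    t2, t3 = t * t, t * t * t
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

samples = [0.0, 1.0, 4.0, 3.0]                       # p0..p3; p3 must already be known
curve = [catmull_rom(*samples, t / 10.0) for t in range(11)]
print(curve[0], curve[-1])                           # 1.0 4.0: the curve passes through p1 and p2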
We measure spontaneous Raman scattering (SRS) effects in the C-band and observe that trench-assisted MCF is robust to SRS noise, making it possible to run quantum channels in the neighboring and/or the same core as data channels.
@inproceedings{diva2:1334839,
author = {Lin, R. and Gan, L. and Udalcovs, A. and Ozolins, O. and Pang, X. and Shen, L. and Popov, Sergei and Tang, M. and Fu, S. and Tong, W. and Liu, D. and Ferreira da Silva, T. and Xavier, Guilherme B and Chen, J.},
title = {{Spontaneous Raman Scattering Effects in Multicore Fibers: Impact on Coexistence of Quantum and Classical Channels}},
booktitle = {2019 OPTICAL FIBER COMMUNICATIONS CONFERENCE AND EXHIBITION (OFC)},
year = {2019},
publisher = {IEEE},
}
The digital currency Bitcoin has had remarkable growth since it was first proposed in 2008. Its distributed nature allows currency transactions without a central authority by using cryptographic methods and a data structure called the blockchain. Imagine that you could run the Bitcoin protocol on a quantum computer. What advantages can be had over classical Bitcoin? This is the question we answer here by introducing Quantum Bitcoin which, among other features, has immediate local verification of transactions. This is a major improvement over classical Bitcoin since we no longer need the computationally-intensive and time-consuming method of recording all transactions in the blockchain. Quantum Bitcoin is the first distributed quantum currency, and this paper introduces the necessary tools including a novel two-stage quantum mining process. In addition, we have counterfeiting resistance, fully anonymous and free transactions, and a smaller footprint than classical Bitcoin.
@inproceedings{diva2:936324,
author = {Jogenfors, Jonathan},
title = {{Quantum Bitcoin:
An Anonymous, Distributed, and Secure Currency Secured by the No-Cloning Theorem of Quantum Mechanics}},
booktitle = {2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC)},
year = {2019},
publisher = {IEEE},
}
Representative driving cycles are of key importance for design and dimensioning of powertrains. One approach for generation of representative driving cycles is to define relevant driving missions, which include different street types, obstacles and traffic conditions, and simulate them in a traffic simulation tool. Such a simulation approach also requires representative driver models to generate the speed profiles for the defined driving missions. The feasibility of this approach is investigated in this paper.
@inproceedings{diva2:1332800,
author = {Kharrazi, Sogol and Nielsen, Lars and Frisk, Erik},
title = {{Design cycles for a given driving mission}},
booktitle = {DYNAMICS OF VEHICLES ON ROADS AND TRACKS, VOL 1},
year = {2018},
pages = {323--328},
publisher = {CRC PRESS-TAYLOR \& FRANCIS GROUP},
}
In this talk, we discuss integrating quantum key distribution (QKD) with spatial division multiplexing (SDM) enabled high-capacity optical communication networks for cyber security.
@inproceedings{diva2:1297723,
author = {Lin, Rui and Udalcovs, Aleksejs and Ozolins, Oskars and Pang, Xiaodan and Gan, Lin and Shen, Li and Tang, Ming and Fu, Songnian and Yang, Chen and Tong, Weijun and Liu, Deming and da Silva, Thiago Ferreira and Xavier, Guilherme B and Chen, Jiajia},
title = {{Integrating Quantum Key Distribution with the Spatial Division Multiplexing Enabled High Capacity Optical Networks}},
booktitle = {2018 ASIA COMMUNICATIONS AND PHOTONICS CONFERENCE (ACP)},
year = {2018},
series = {Asia Communications and Photonics Conference and Exhibition},
publisher = {IEEE},
}
We present a method for estimating the location of edges in binary images, in order to correct the distance produced by distance transforms to sub-pixel precision. We also show that the resulting precision significantly outperforms the precision of an uncorrected Euclidean Distance Transform.
@inproceedings{diva2:1282088,
author = {Ragnemalm, Ingemar},
title = {{A sub-pixel distance correction method for distance transforms of binary images.}},
booktitle = {Swedish Symposium on Image Analysis},
year = {2018},
publisher = {SSBA},
address = {Stockholm},
}
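For context, the uncorrected Euclidean distance transform that serves as the baseline in the papers above can be computed as below; the edge-location estimation and the sub-pixel correction themselves are described in the paper and not reproduced here.

# Uncorrected Euclidean distance transform of a binary image (the baseline being improved).
import numpy as np
from scipy.ndimage import distance_transform_edt

image = np.zeros((9, 9), dtype=bool)
image[3:6, 3:6] = True                   # a small foreground square

# Distance from every background pixel to the nearest foreground pixel.
dist = distance_transform_edt(~image)
print(np.round(dist, 2))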
This paper presents a hierarchical interconnect architecture for optical data center networks composed of specially designed couplers allowing for significant reduction of required spectral resources.
@inproceedings{diva2:1463471,
author = {Wiatr, P. and Forchheimer, Robert and Furdek, M. and Chen, J. and Wosinska, L. and Yuan, D.},
title = {{Hierarchical optical interconnects saving spectrum resources in data center networks}},
booktitle = {Optics InfoBase Conference Papers},
year = {2017},
publisher = {Optical Society of America},
}
We describe the implementation of a non-contact motion encoder based on the Near-Sensor Image Processing (NSIP) concept. Rather than computing image displacements between frames we search for LEP stability as used successfully in a previously published Time-to-Impact detector. A LEP is a single pixel feature that is tracked during its motion. It is found that this results in a non-complex and fast implementation. As with other NSIP-based solutions, high dynamic range is obtained as the sensor adapts itself to the lighting conditions.
@inproceedings{diva2:1463470,
author = {Åström, Anders and Forchheimer, Robert},
title = {{Fast, low-complex, non-contact motion encoder based on the NSIP concept}},
booktitle = {IS and T International Symposium on Electronic Imaging Science and Technology},
year = {2017},
series = {IS and T International Symposium on Electronic Imaging Science and Technology},
pages = {91--95},
publisher = {Society for Imaging Science and Technology},
}
With depth sensors becoming more and more common, and applications with varying viewpoints (such as virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based rendering algorithms that reach a high quality. Starting from a quality-wise top-performing depth-image-based renderer, we develop a real-time version. While also reaching a high quality, the new OpenGL-based renderer reduces runtime by (at least) two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enable us to remove the common parallelization bottleneck of competing memory access, and facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline. We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data, which contains both rapid camera movements and rotations as well as complex scenes, and is therefore challenging to project accurately.
@inproceedings{diva2:1371379,
author = {Ogniewski, Jens},
title = {{High-Quality Real-Time Depth-Image-Based-Rendering}},
booktitle = {Proceedings of SIGRAD 2017, August 17-18, 2017 Norrköping, Sweden},
year = {2017},
series = {Linköping Electronic Conference Proceedings},
volume = {143},
pages = {1--8},
publisher = {Linköping University Electronic Press},
}
More and more devices have depth sensors, making RGB+D video (colour+depth video) increasingly common. RGB+D video allows the use of depth image based rendering (DIBR) to render a given scene from different viewpoints, thus making it a useful asset in view prediction for 3D and free-viewpoint video coding. In this paper we evaluate a multitude of algorithms for scattered data interpolation, in order to optimize the performance of DIBR for video coding. This also includes novel contributions like a Kriging refinement step, an edge suppression step to suppress artifacts, and a scale-adaptive kernel. Our evaluation uses the depth extension of the Sintel datasets. Using ground-truth sequences is crucial for such an optimization, as it ensures that all errors and artifacts are caused by the prediction itself rather than noisy or erroneous data. We also present a comparison with the commonly used mesh-based projection.
@inproceedings{diva2:1253223,
author = {Ogniewski, Jens and Forss\'{e}n, Per-Erik},
title = {{Pushing the Limits for View Prediction in Video Coding}},
booktitle = {PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2017), VOL 4},
year = {2017},
pages = {68--76},
publisher = {SCITEPRESS},
}
It has been a challenging task for organic electronic devices to control and convert electric power due to their vulnerability when exposed to high voltages. In this work, the design, fabrication, and applications of AC-DC and DC-AC organic power converters based on high-voltage organic thin-film transistors are presented. The organic AC-DC converter, comprising diode-configured high-voltage organic thin-film transistors as the rectifying unit, is capable of transforming a high AC input voltage to a selectable DC voltage. Conversely, the organic DC-AC converter, using an astable multivibrator as oscillation generator, is capable of converting a high DC voltage to a high AC voltage as a power inverter. The functionality of the organic AC-DC power converter is demonstrated through charging supercapacitors as a quasi-constant current supply and, in addition, the successful driving of organic light-emitting devices to high luminescence and efficiency. Expanding the application of organic thin-film transistors into power conversion paves the way towards cost-efficient and eco-friendly organic power electronics in the future.
@inproceedings{diva2:1197271,
author = {Tu, Deyu},
title = {{Organic Power Converters: Design, Fabrication, and Applications}},
booktitle = {2017 24TH INTERNATIONAL WORKSHOP ON ACTIVE-MATRIX FLATPANEL DISPLAYS AND DEVICES (AM-FPD)},
year = {2017},
pages = {77--80},
publisher = {IEEE},
}
John Bell's theorem of 1964 states that local elements of physical reality, existing independently of measurement, are inconsistent with the predictions of quantum mechanics (Bell, J. S. (1964), Physics (College Park, Md.) 1(3), 195). Specifically, correlations between measurement results from distant entangled systems would be smaller than predicted by quantum physics. This is expressed in Bell's inequalities. Employing modifications of Bell's inequalities, many experiments have been performed that convincingly support the quantum predictions. Yet, all experiments rely on assumptions, which provide loopholes for a local realist explanation of the measurement. Here we report an experiment with polarization-entangled photons that simultaneously closes the most significant of these loopholes. We used a highly efficient source of entangled photons, distributed them over a distance of 58.5 meters, and implemented rapid random setting generation and high-efficiency detection to observe a violation of a Bell inequality with high statistical significance. The probability of our results occurring under local realism is less than 3.74 × 10^-31, corresponding to an 11.5 standard deviation effect.
@inproceedings{diva2:1171632,
author = {Giustina, Marissa and Versteegh, Marijn A. M. and Wengerowsky, Soeren and Handsteiner, Johannes and Hochrainer, Armin and Phelan, Kevin and Steinlechner, Fabian and Kofler, Johannes and Larsson, Jan-Åke and Abellan, Carlos and Amaya, Waldimar and Mitchell, Morgan W. and Beyer, Joern and Gerrits, Thomas and Lita, Adriana E. and Shalm, Lynden K. and Woo Nam, Sae and Scheidl, Thomas and Ursin, Rupert and Wittmann, Bernhard and Zeilinger, Anton},
title = {{A Significant-Loophole-Free Test of Bells Theorem with Entangled Photons}},
booktitle = {QUANTUM INFORMATION SCIENCE AND TECHNOLOGY III},
year = {2017},
series = {Proceedings of SPIE},
publisher = {SPIE-INT SOC OPTICAL ENGINEERING},
}
Many of the latest smart phones and tablets come with integrated depth sensors, which make depth-maps freely available, thus enabling new forms of applications like rendering from different viewpoints. However, efficient compression exploiting the characteristics of depth-maps as well as the requirements of these new applications is still an open issue. In this paper, we evaluate different depth-map compression algorithms, with a focus on tree-based methods and view projection as the application.
The contributions of this paper are the following: 1. extensions of existing geometric compression trees, 2. a comparison of a number of different trees, 3. a comparison of them to a state-of-the-art video coder, and 4. an evaluation using ground-truth data that considers both depth-maps and predicted frames with arbitrary camera translation and rotation.
Despite our best efforts, and contrary to earlier results, current video depth-map compression outperforms tree-based methods in most cases. The reason for this is likely that previous evaluations focused on low-quality, low-resolution depth maps, while high-resolution depth (as needed in the DIBR setting) has been ignored up until now. We also demonstrate that PSNR on depth-maps is not always a good measure of their utility.
@inproceedings{diva2:1150797,
author = {Ogniewski, Jens and Forss\'{e}n, Per-Erik},
title = {{What is the best depth-map compression for Depth Image Based Rendering?}},
booktitle = {Computer Analysis of Images and Patterns},
year = {2017},
series = {Lecture Notes in Computer Science},
volume = {10425},
pages = {403--415},
publisher = {Springer},
}
Groups of agents in multi-agent systems may have to cooperate to solve tasks efficiently, and coordinating such groups is an important problem in the field of artificial intelligence. In this paper, we consider the problem of forming disjoint coalitions and assigning them to independent tasks simultaneously, and present an anytime algorithm that efficiently solves the simultaneous coalition structure generation and task assignment problem. This NP-complete combinatorial optimization problem has many real-world applications, including forming cross-functional teams aimed at solving tasks. To evaluate the algorithm's performance, we extend established methods for synthetic problem set generation, and benchmark the algorithm using randomized data sets of varying distribution and complexity. Our results show that the presented algorithm efficiently finds optimal solutions, and generates high quality solutions when interrupted prior to finishing an exhaustive search. Additionally, we apply the algorithm to solve the problem of assigning agents to regions in a commercial computer-based strategy game, and empirically show that our algorithm can significantly improve the coordination and computational efficiency of agents in a real-time multi-agent system.
@inproceedings{diva2:1148303,
author = {Präntare, Fredrik and Ragnemalm, Ingemar and Heintz, Fredrik},
title = {{An Algorithm for Simultaneous Coalition Structure Generation and Task Assignment}},
booktitle = {PRIMA 2017: Principles and Practice of Multi-Agent Systems 20th International Conference, Nice, France, October 30 -- November 3, 2017, Proceedings},
year = {2017},
series = {Lecture Notes in Computer Science},
volume = {10621},
pages = {514--522},
publisher = {Springer},
address = {Cham},
}
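A brute-force baseline for the problem described above (not the anytime algorithm from the paper): each agent is either left unassigned or assigned to exactly one task, which induces disjoint coalitions, and the labeling with the highest total value wins. The value function below is a made-up example.

# Exhaustive baseline for simultaneous coalition structure generation and task assignment.
# Exponential in the number of agents; usable only for tiny instances.
from itertools import product

agents = ["a1", "a2", "a3"]
tasks = ["defend", "attack"]

def value(coalition, task):
    # Hypothetical utility: larger coalitions help, and 'attack' rewards size more.
    return len(coalition) ** 2 if task == "attack" else len(coalition)

best_value, best_coalitions = float("-inf"), None
for labels in product([None] + tasks, repeat=len(agents)):
    coalitions = {t: frozenset(a for a, l in zip(agents, labels) if l == t) for t in tasks}
    total = sum(value(c, t) for t, c in coalitions.items() if c)
    if total > best_value:
        best_value, best_coalitions = total, coalitions

print(best_value, best_coalitions)       # here all three agents attacking scores best (9)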
Distance Transformations are usually defined for binary images, and therefore subject to sampling noise. This paper presents a distance correction that can be applied to any existing distance transform algorithm to achieve sub-pixel accuracy, and evaluates the result. Measurements display a significant improvement in precision.
@inproceedings{diva2:1129128,
author = {Ragnemalm, Ingemar},
title = {{Towards Sub-pixel Distance Measures for Distance Transformations}},
booktitle = {PROCEEDINGS OF THE INTERNATIONAL CONFERENCES COMPUTER GRAPHICS, VISUALIZATION, COMPUTER VISION AND IMAGE PROCESSING 2017 and BIG DATA ANALYTICS, DATA MINING AND COMPUTATIONAL INTELLIGENCE 2017},
year = {2017},
pages = {351--353},
}
The anti-aliased Euclidean distance transform is a recent development that redefines the distance transform concept, in particular the concept of precision and correctness, and has been shown to benefit certain applications. This paper presents and evaluates new versions of the anti-aliased distance transform. Our vector-based version simplifies the algorithm while providing a richer output in the form of vector data, with no measurable degradation in quality compared to the algorithm it is based on. Finally, we use a new method for measuring errors based on generating exact ground truth images.
@inproceedings{diva2:1129127,
author = {Ragnemalm, Ingemar},
title = {{New Algoritms for Anti-Aliased Distance Transformations}},
booktitle = {PROCEEDINGS OF THE INTERNATIONAL CONFERENCES COMPUTER GRAPHICS, VISUALIZATION, COMPUTER VISION AND IMAGE PROCESSING 2017 and BIG DATA ANALYTICS, DATA MINING AND COMPUTATIONAL INTELLIGENCE 2017},
year = {2017},
pages = {63--70},
publisher = {IADIS Press},
}
We present a current supply, comprising a single organic thin-film transistor (OTFT), for the charging of supercapacitors. The current supply takes power from the electric grid (115 V AC, US standard), converts the AC voltage to a quasi-constant DC current (approximately 0.1 mA) regardless of the impedance of the load, and charges the supercapacitor. Solution-processed OTFTs based on the popular polymeric semiconductor poly(3-hexylthiophene-2,5-diyl) have been developed to rectify the 115 V AC voltage. A diode-configured OTFT was used as a half-wave rectifier. The single-OTFT current supply was demonstrated to charge a 220 mF supercapacitor to 1 V directly using 115 V AC voltage as the input. This work paves the way towards all-printable supercapacitor energy-storage systems with integrated chargers, which enable direct charging from a power outlet.
@inproceedings{diva2:1135158,
author = {Keshmiri, Vahid and Larsen, C. and Edman, L. and Forchheimer, Robert and Tu, Deyu},
title = {{A Current Supply with Single Organic Thin-Film Transistor for Charging Supercapacitors}},
booktitle = {THIN FILM TRANSISTORS 13 (TFT 13)},
year = {2016},
series = {ECS Transactions},
pages = {217--222},
publisher = {ELECTROCHEMICAL SOC INC},
}
We report an experimental violation of a Bell inequality with strong statistical significance. Our experiment employs polarization measurements on entangled single photons and closes the locality, freedom-of-choice, fair-sampling, coincidence-time, and memory loopholes simultaneously.
@inproceedings{diva2:1074335,
author = {Giustina, M. and Versteegh, M. A. M. and Wengerowsky, S. and Handsteiner, J. and Hochrainer, A. and Phelan, K. and Steinlechner, F. and Kofler, J. and Larsson, Jan-Åke and Abellan, C. and Amaya, W. and Pruneri, V. and Mitchell, M. W. M. and Beyer, J. and Gerrits, T. and Lita, A. and Shalm, L. K. and Nam, S. W. and Scheidl, T. and Ursin, R. and Wittmann, B. and Zeilinger, A.},
title = {{Significant-Loophole-Free Test of Local Realism with Entangled Photons}},
booktitle = {2016 CONFERENCE ON LASERS AND ELECTRO-OPTICS (CLEO)},
year = {2016},
series = {Conference on Lasers and Electro-Optics},
publisher = {IEEE},
}
In this paper, we investigate using OECTs in differential amplifiers and cell voltage equalizers for supercapacitor balancing circuits. The differential amplifier based on OECTs can sense voltage difference and the voltage equalizer consisting of a microcontroller and OECTs can be used to charge supercapacitors to desired voltages.
@inproceedings{diva2:1071671,
author = {Keshmiri, Vahid and Forchheimer, Robert and Tu, Deyu and Westerberg, David and Sandberg, Mats},
title = {{The Applications of OECTs in Supercapacitor Balancing Circuits}},
booktitle = {2016 7TH INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN FOR THIN-FILM TRANSISTOR TECHNOLOGIES (CAD-TFT)},
year = {2016},
pages = {23--23},
publisher = {IEEE},
}
The linear-drift memristor model, suggested by HP Labs a few years ago, is used in this work together with two window functions. From the equations describing the memristor model, the transfer characteristics of a memristor are formulated and analyzed. A first-order estimation of the cut-off frequency is given, illustrating the bandwidth limitation of the memristor and how it varies with some of its physical parameters. The design space is elaborated upon, and it is shown that the state speed, the variation of the doped and undoped regions of the memristor, is inversely proportional to the physical length and depth of the device. The transfer characteristics are simulated for the Joglekar-Wolf and Biolek window functions and the results are analyzed. The Joglekar-Wolf window function causes a distinct behavior in the transfer characteristics at the cut-off frequency. The Biolek window function, on the other hand, gives a smooth state transfer function, at the cost of losing the one-to-one mapping between charge and state. We also elaborate on the design constraints derived from the transfer characteristics.
@inproceedings{diva2:974174,
author = {Alvbrant, Joakim and Keshmiri, Vahid and Wikner, Jacob},
title = {{Transfer Characteristics and Bandwidth Limitation in a Linear-Drift Memristor Model}},
booktitle = {2015 EUROPEAN CONFERENCE ON CIRCUIT THEORY AND DESIGN (ECCTD)},
year = {2015},
pages = {332--335},
publisher = {IEEE},
}
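For reference, the linear-drift model and the two window functions discussed above are, in their standard textbook form (the paper's exact normalisations may differ):

\[
v(t) = \Big(R_{\mathrm{on}}\,\frac{w(t)}{D} + R_{\mathrm{off}}\Big(1 - \frac{w(t)}{D}\Big)\Big)\, i(t),
\qquad
\frac{dw}{dt} = \mu_{v}\,\frac{R_{\mathrm{on}}}{D}\, i(t)\, f\!\left(\frac{w}{D}\right),
\]

with the Joglekar-Wolf window \(f(x) = 1 - (2x - 1)^{2p}\) and the Biolek window \(f(x) = 1 - \big(x - \mathrm{stp}(-i)\big)^{2p}\), where \(\mathrm{stp}(\cdot)\) is the unit step function and \(p\) a positive integer controlling the stiffness of the window.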
Color interpolation is still the most widely used method for image upsampling, since it offers the simplest and therefore fastest algorithms. However, in recent years research has concentrated on other techniques to counter the shortcomings of interpolation (such as color artifacts, or the fact that interpolation does not take statistics into account), while interpolation itself has ceased to be an active research topic. Still, current interpolation techniques can be improved. In particular, it should be possible to avoid color artifacts by carefully choosing the correct interpolation schemes. In this paper we derive mathematical constraints which need to be fulfilled to reach an artifact-free interpolation, and use these to develop an interpolation method which is essentially a self-configuring cubic spline.
@inproceedings{diva2:971470,
author = {Ogniewski, Jens},
title = {{Artifact-Free Color Interpolation}},
booktitle = {Proceedings SCCG: 2015 31st Spring Conference on Computer Graphics},
year = {2015},
pages = {116--119},
publisher = {ASSOC COMPUTING MACHINERY},
}
In recent years, short-term single-object tracking has emerged as a popular research topic, as it constitutes the core of more general tracking systems. Many such tracking methods are based on matching a part of the image with a template that is learnt online and represented by, for example, a correlation filter or a distribution field. In order for such a tracker to be able to not only find the position, but also the scale, of the tracked object in the next frame, some kind of scale estimation step is needed. This step is sometimes separate from the position estimation step, but is nevertheless jointly evaluated in de facto benchmarks. However, for practical as well as scientific reasons, the scale estimation step should be evaluated separately; for example, there might in certain situations be other methods more suitable for the task. In this paper, we describe an evaluation method for scale estimation in template-based short-term single-object tracking, and evaluate two state-of-the-art tracking methods where estimation of scale and position are separable.
@inproceedings{diva2:853786,
author = {Ahlberg, Jörgen and Berg, Amanda},
title = {{Evaluating Template Rescaling in Short-Term Single-Object Tracking}},
booktitle = {17th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), Karlsruhe, Germany, August 25, 2015},
year = {2015},
publisher = {IEEE},
}
Architecture on demand (AoD) nodes offer considerable flexibility compared to traditional ROADMs. The paper presents a cost-efficient network planning strategy that exploits the flexibility inherent in AoD. Results show that AoD can yield significant savings in node modules through a proper network design.
@inproceedings{diva2:861943,
author = {Muhammad, Ajmal and Zervas, Georgios S. and Amaya, Norberto and Simeonidou, Dimitra and Forchheimer, Robert},
title = {{Cost-Efficient Design of Flexible Optical Networks Implemented by Architecture on Demand}},
booktitle = {2014 OPTICAL FIBER COMMUNICATIONS CONFERENCE AND EXHIBITION (OFC)},
year = {2014},
pages = {W2A.17--},
publisher = {IEEE},
}
This study looks into network planning issues for synthetic MCF-based SDM networks implemented through programmable ROADMs. The results show that significant savings in switching modules and energy can be attained by exploiting the flexibility inherent in programmable ROADM through a proper network design.
@inproceedings{diva2:797271,
author = {Ajmal, Muhammad and Zervas, Georgios and Saridis, George and Salas, Emilio H. and Simeonidou, Dimitra and Forchheimer, Robert},
title = {{Flexible and Synthetic SDM Networks with Multi-core-Fibers Implemented by Programmable ROADMs}},
booktitle = {Proceedings of European Conference on Optical Communication ECOC2014, Cannes, France, September 21-25 September 2014},
year = {2014},
pages = {1--3},
publisher = {IEEE},
}
Survivable synthetic ROADMs are equipped with redundant switching modules to support failure recovery. The paper proposes a dynamic connection provisioning strategy which exploits these idle redundant modules to provision regular traffic resulting in a substantial improvement in the blocking performance.
@inproceedings{diva2:797259,
author = {Ajma, Muhammad and Furdek, Marija and Monti, Paolo and Wosinska, Lena and Forchheimer, Robert},
title = {{Dynamic provisioning utilizing redundant modules in elastic optical networks based on architecture on demand nodes}},
booktitle = {European Conference on Optical Communication (ECOC), 2014},
year = {2014},
pages = {1--3},
publisher = {IEEE},
}
We investigate benefits of setup-delay tolerance in elastic optical networks and propose an optimization model for dynamic and concurrent connection provisioning. Simulation shows that the proposed strategy offers significant improvement of the network blocking performance.
@inproceedings{diva2:797246,
author = {Ajmal, Muhammad and Furdek, Marija and Monti, Paolo and Wosinska, Lena and Forchheimer, Robert},
title = {{An Optimization Model for Dynamic Bulk Provisioning in Elastic Optical Networks}},
booktitle = {Asia Communications and Photonics Conference 2014},
year = {2014},
pages = {AF3E.6--},
publisher = {Optics Info Base, Optical Society of America},
}
District heating pipes are known to degenerate with time and in some cities the pipes have been used for several decades. Due to bad insulation or cracks, energy or media leakages might appear. This paper presents a complete system for large-scale monitoring of district heating networks, including methods for detection, classification and temporal characterization of (potential) leakages. The system analyses thermal infrared images acquired by an aircraft-mounted camera, detecting the areas for which the pixel intensity is higher than normal. Unfortunately, the system also finds many false detections, i.e., warm areas that are not caused by media or energy leakages. Thus, in order to reduce the number of false detections we describe a machine learning method to classify the detections. The results, based on data from three district heating networks show that we can remove more than half of the false detections. Moreover, we also propose a method to characterize leakages over time, that is, repeating the image acquisition one or a few years later and indicate areas that suffer from an increased energy loss.
@inproceedings{diva2:776415,
author = {Berg, Amanda and Ahlberg, Jörgen},
title = {{Classification and temporal analysis of district heating leakages in thermal images}},
booktitle = {Proceedings of The 14th International Symposium on District Heating and Cooling},
year = {2014},
}
We address the problem of reducing the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps. First, we use a building segmentation scheme in order to remove detections on buildings. Second, we extract features from the detections and use a Random forest classifier on the remaining detections. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system.
@inproceedings{diva2:776248,
author = {Berg, Amanda and Ahlberg, Jörgen},
title = {{Classification of leakage detections acquired by airborne thermography of district heating networks}},
booktitle = {2014 8th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)},
year = {2014},
series = {IAPR Workshop on Pattern Recognition in Remote Sensing},
pages = {1--4},
publisher = {IEEE},
}
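A minimal sketch of the classification step described above, using a random forest on per-detection features; the feature columns and synthetic labels below are illustrative placeholders, not the features or data used in the paper.

# Classify leakage detections as true or false alarms with a random forest (sketch only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-detection features: mean intensity, intensity variance, detection area.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))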
Space division multiplexing (SDM) over multi-core fiber (MCF) is advocated as a promising technology to overcome the capacity limit of current single-core optical networks. However, employing MCF in flexgrid networks necessitates the development of new concepts, such as routing, spectrum and core allocation (RSCA) for traffic demands. The introduction of MCF in the networks mitigates the spectrum continuity constraint of the routing and spectrum assignment (RSA) problem. In fact, cores can be switched freely on different links during routing of the network traffic. Similarly, the route disjointness for demands with the same allocated spectrum diminishes to core disjointness at the link level. On the other hand, some new issues such as inter-core crosstalk must be taken into account while solving the RSCA problem. This paper formulates the RSCA network planning problem as an integer linear programming (ILP) formulation. The aim is to minimize the maximum number of spectrum slices required on any core of the MCF of a flexgrid SDM network. Furthermore, a scalable and effective heuristic is proposed for the same problem and its performance is compared with the optimal solution. The results show that the proposed algorithm closely approximates the optimal solution obtained with the ILP model.
@inproceedings{diva2:765548,
author = {Ajmal, Muhammad and Zervas, Georgios and Simeonidou, Dimitra and Forchheimer, Robert},
title = {{Routing, Spectrum and Core Allocation in Flexgrid SDM Networks with Multi-core Fibers}},
booktitle = {2014 INTERNATIONAL CONFERENCE ON OPTICAL NETWORK DESIGN AND MODELING},
year = {2014},
pages = {192--197},
publisher = {IEEE},
}
In this paper we present a 2D extension of a previously described 1D method for a time-to-impact sensor [5][6]. As in the earlier paper, the approach is based on measuring time instead of the apparent motion of points in the image plane to obtain data similar to the optical flow. The specific properties of the motion field in the time-to-impact application are used, such as the use of simple feature points that are tracked from frame to frame. Compared to the 1D case, the features will be proportionally fewer, which will affect the quality of the estimation. We propose a way to solve this problem. The results obtained are as promising as those obtained from the 1D sensor.
@inproceedings{diva2:717957,
author = {Åström, Anders and Forchheimer, Robert},
title = {{A High Speed 2D Time-to-Impact Algorithm Targeted for Smart Image Sensors}},
booktitle = {\emph{Proc. SPIE} 9022, Image Sensors and Imaging Systems 2014},
year = {2014},
series = {Proceedings of SPIE},
volume = {9022},
publisher = {International Society for Optical Engineering},
}
We discuss how the GLUT library can be modified to suit the needs of modern OpenGL, what parts can be excluded or need redesigning. Based on these results, we present a new cross-platform user interface framework for OpenGL, called MicroGlut, which covers the most vital features. We also draft how interesting features currently excluded from MicroGlut could be added.
@inproceedings{diva2:1129515,
author = {Ragnemalm, Ingemar},
title = {{A User Interface Framework for Modern OpenGL Based on the OpenGL Utility Toolkit}},
booktitle = {Computer Graphics, Visualization, Computer Vision and Image Processing 2013},
year = {2013},
pages = {151--154},
}
This paper presents new radix-2 and radix-2² constant geometry fast Fourier transform (FFT) algorithms for graphics processing units (GPUs). The algorithms combine the use of constant geometry with special scheduling of operations and distribution among the cores. Performance tests on current GPUs show significant improvements compared to the most recent version of NVIDIA’s well-known CUFFT, achieving speedups of up to 5.6x.
@inproceedings{diva2:927926,
author = {Ambuluri, Sreehari and Garrido, Mario and Caffarena, Gabriel and Ogniewski, Jens and Ragnemalm, Ingemar},
title = {{New Radix-2 and Radix-2$^{2}$ Constant Geometry Fast Fourier Transform Algorithms For GPUs}},
booktitle = {IADIS Computer Graphics, Visualization, Computer Vision and Image Processing},
year = {2013},
pages = {59--66},
}
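For reference, a plain radix-2 decimation-in-time FFT is sketched below; the constant-geometry dataflow and the GPU scheduling that constitute the paper's contribution are not reproduced.

# Plain iterative radix-2 DIT FFT (CPU reference; not the constant-geometry GPU variant).
import cmath

def fft_radix2(x):
    """Radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    assert n > 0 and n & (n - 1) == 0
    x = list(x)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # Butterfly stages.
    size = 2
    while size <= n:
        half = size // 2
        w_step = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0 + 0.0j
            for k in range(half):
                a, b = x[start + k], x[start + k + half] * w
                x[start + k], x[start + k + half] = a + b, a - b
                w *= w_step
        size *= 2
    return x

print(fft_radix2([1, 1, 1, 1, 0, 0, 0, 0]))   # matches the textbook DFT of this sequence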
We propose a connection provisioning strategy for dynamic all-optical networks which exploits the possibility of allowing a tolerable signal quality degradation during a small fraction of the holding time, resulting in a significant improvement in blocking performance.
@inproceedings{diva2:797231,
author = {Ajmal, Muhammad and Cavdar, Cicek and Wosinska, Lena and Forchheimer, Robert},
title = {{Trading Quality of Transmission for Improved Blocking Performance in All-Optical Networks}},
booktitle = {Asia Communications and Photonics Conference 2013},
year = {2013},
pages = {AF4E.5--},
}
Emerging, on-demand applications (e.g., ultra-high definition TV, grid computing, e-health, digital cinema, etc.) will dominate the next generation of optical networks. Dynamic on-demand provisioning of optical channels is advocated as a promising solution to fulfil the high bandwidth requirements of these applications. Among the various on-line strategies proposed to provision such applications, the ones exploiting knowledge of the connection holding time and the flexibility provided by the set-up delay tolerance have shown good potential in improving the overall network blocking performance. Considering the fact that users are willing to wait for provisioning until their set-up delay tolerance expires, it is not of prime importance to establish connections on the earliest available resources. Rather, effective results in terms of network blocking performance can be achieved by provisioning connections on optimal, instead of the earliest available, resources that exist within the set-up delay tolerance. This paper presents novel connection scheduling strategies that efficiently exploit the set-up delay tolerance and holding-time knowledge for dynamic wavelength division multiplexing (WDM) networks. The proposed scheme computes the set of all available provisioning opportunities (at different time instants) within the set-up delay tolerance and selects the one that is most efficient in terms of network resource utilization. This scheme is investigated for two different scenarios, that is, for connection requests that cannot be provisioned at the time of their arrival due to resource unavailability, and for every request irrespective of whether the required resources for set-up are available or not. Simulation results confirm that the proposed strategies attain significant improvement in network blocking probability compared to earlier presented techniques.
@inproceedings{diva2:715702,
author = {Muhammad, Ajmal and Forchheimer, Robert},
title = {{Efficient Scheduling Strategies for Dynamic WDM Networks with Set-Up Delay Tolerance}},
booktitle = {15th International Conference on Transparent Optical Networks (ICTON 2013), 23-27 June 2013, Cartagena, Spain},
year = {2013},
pages = {1--4},
publisher = {IEEE},
}
A terminal charge and capacitance model is developed for transient behavior simulation of electrolyte-gated organic field effect transistors (EGOFETs). Based on the Ward-Dutton partition scheme, the charge and capacitance model is derived from our drain current model reported previously. The transient drain current is expressed as the sum of the initial drain current and the charging current, which is written as the product of the partial derivatives of the terminal charges with respect to the terminal voltages and the time derivatives of the terminal voltages. The validity of this model is verified by experimental measurements.
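In generic Ward-Dutton notation (our rendering, not the paper's exact symbols), the transient terminal current described above takes the form

\[ i_D(t) = I_D\bigl(V_G(t), V_D(t), V_S(t)\bigr) + \frac{dQ_D}{dt}, \qquad \frac{dQ_D}{dt} = \sum_{X \in \{G,D,S\}} \frac{\partial Q_D}{\partial V_X}\,\frac{dV_X}{dt}, \]

i.e. the initial (quasi-static) drain current plus a charging current built from the partial derivatives of the partitioned terminal charge with respect to the terminal voltages; the specific charge expressions come from the authors' drain current model.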
@inproceedings{diva2:925853,
author = {Tu, Deyu and Kergoat, Loïg and Crispin, Xavier and Berggren, Magnus and Forchheimer, Robert},
title = {{Transient analysis of electrolyte-gated organic field effect transistors}},
booktitle = {SPIE Proceedings Vol. 8478},
year = {2012},
series = {Proceedings of SPIE},
pages = {84780L-1--84780L-8},
}
Face tracking is an extensively studied field. Nevertheless, it is still a challenge to make a robust and efficient face tracker, especially on mobile devices. This extended abstract briefly describes our implementation of a high-performance multi-platform face and facial feature tracking system. The main characteristics of our approach are that the tracker is fully automatic and works with the majority of faces without any manual initialization. It is robust, resistant to rapid changes in pose and facial expressions, does not suffer from drifting and is modestly computationally expensive. The tracker runs in real-time on mobile devices.
@inproceedings{diva2:845459,
author = {Marku\v{s}, Nenad and Frljak, Miroslav and Pandži\'{c}, Igor and Ahlberg, Jörgen and Forchheimer, Robert},
title = {{High-performance face tracking}},
booktitle = {ACM 3rd International Symposium on Facial Analysis and Animation},
year = {2012},
}
The discussion on noncontextual hidden-variable models as an underlying description for the quantum-mechanical predictions started in earnest with the 1967 paper by Kochen and Specker. There, it was shown that no noncontextual hidden-variable model can give these predictions. The proof used in that paper is complicated, but recently, a paper by Yu and Oh [PRL, 2012] proposes a simpler statistical proof that can also be the basis of an experimental test. Here we report on a sharper version of that statistical proof, and also explain why the algebraic upper bound of the expressions used is not reachable, even with a reasonable contextual hidden-variable model. Specifically, we show that the quantum-mechanical predictions reach the maximal possible value for a contextual model that keeps the expectation value of the measurement outcomes constant.
@inproceedings{diva2:642283,
author = {Larsson, Jan-Åke and Kleinmann, Matthias and Budroni, Constantino and Guehne, Otfried and Cabello, Adán},
title = {{Maximal violation of state-independent contextuality inequalities}},
booktitle = {Quantum Theory: Reconsideration of Foundations - 6},
year = {2012},
series = {AIP Conference Proceedings},
volume = {1508},
pages = {265--274},
publisher = {American Institute of Physics (AIP)},
}
Emerging, on-demand applications (e.g., interactive video, ultra-high definition TV, backup storage and grid computing) are gaining momentum and are becoming increasingly important. Given the high bandwidth required by these applications, Wavelength Division Multiplexing (WDM) networks are seen as the natural choice for their transport technology. Among the various on-line strategies proposed to provision such services, the ones based on service level agreement (SLA) metrics such as set-up delay tolerance and connection holding-time awareness showed a good potential in improving the overall network blocking performance. However, in a scenario where connection requests are grouped in different service classes, the provisioning success rate might be unbalanced towards those connection requests with less stringent requirements, i.e., not all the connection requests are treated in a fair way. This paper addresses the problem of how to guarantee the signal quality and the fair provisioning of different service classes, where each class corresponds to a specified target of quality of transmission (QoT). With this objective in mind, three fair scheduling algorithms are proposed in a dynamic traffic scenario, each one combining in a different way the concepts of set-up delay tolerance and connection holding-time awareness. The proposed solutions are specifically tailored to facilitate the provisioning of the most stringent service class so as to balance the success rate among the different classes. Simulation results confirm that the proposed approaches are able to guarantee a fair treatment, reaching up to 99% in terms of Jain's fairness index considering the per-class success ratio, without compromising the improvements in terms of overall network blocking probability.
@inproceedings{diva2:571445,
author = {Muhammad, Ajmal and Cavdar, Cicek and Monti, Paolo},
title = {{Fair Scheduling of Dynamically Provisioned WDM Connections with Differentiated Signal Quality}},
booktitle = {Proceedings of the 16th International Conference on Optical Network Design and Modeling (ONDM), 2012},
year = {2012},
pages = {1--6},
publisher = {IEEE},
}
Universal hash function based multiple authentication was originally proposed by Wegman and Carter in 1981. In this authentication, a series of messages are authenticated by first hashing each message by a fixed (almost) strongly universal$_2$ hash function and then encrypting the hash value with a preshared one-time pad. This authentication is unconditionally secure. In this paper, we show that the unconditional security cannot be guaranteed if the hash function output for the first message is not encrypted, as remarked in [Atici and Stinson, CRYPTO '96. LNCS, vol. 1109]. This means that it is not only sufficient, but also necessary, to encrypt the hash of every message to be authenticated in order to have unconditional security. The security loss is demonstrated by a simple existential forgery attack.
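In minimal notation of our own for the construction described above: with a fixed (almost) strongly universal$_2$ hash function $h$ and fresh one-time-pad blocks $k_1, k_2, \dots$, the tag for the $i$-th message is

\[ t_i = h(m_i) \oplus k_i . \]

The paper's point is that dropping the encryption for the first tag, i.e. sending $t_1 = h(m_1)$ in the clear, already admits a simple existential forgery, so every tag in the sequence must be encrypted.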
@inproceedings{diva2:561456,
author = {Abidin, Aysajan},
title = {{On Security of Universal Hash Function Based Multiple Authentication}},
booktitle = {Lecture Notes in Computer Science, Vol. 7618},
year = {2012},
series = {Lecture Notes in Computer Science},
volume = {7618},
pages = {303--310},
}
Universal hash functions are important building blocks for unconditionally secure message authentication codes. In this paper, we present a new construction of a class of Almost Strongly Universal hash functions with much smaller description (or key) length than the Wegman-Carter construction. Unlike some other constructions, our new construction has a very short key length and a security parameter that is independent of the message length, which makes it suitable for authentication in practical applications such as Quantum Cryptography.
@inproceedings{diva2:561455,
author = {Abidin, Aysajan and Larsson, Jan-Åke},
title = {{New Universal Hash Functions}},
booktitle = {Lecture Notes in Computer Science, Vol. 7242},
year = {2012},
series = {Lecture Notes in Computer Science},
volume = {7242},
pages = {99--108},
publisher = {Springer Berlin Heidelberg},
}
With the emergence of bandwidth intensive applications, new methodologies need to be developed for improvement of network blocking performance without supplying extra resources in dynamic wavelength-division-multiplexing (WDM) networks. Rerouting is one of the viable and cost-effective solutions to reduce the blocking probability (BP) of optical WDM networks. Similarly, set-up delay tolerance, a metric of the service level agreement (SLA), has been exploited in [1]-[3] for improvement in network BP. In this paper, we study rerouting in dynamic WDM networks and analyze two different lightpath rerouting strategies. Moreover, we investigate further improvement in network BP by exploiting these rerouting techniques for a network provisioned with set-up delay tolerance. Through extensive simulation studies, we confirm that the rerouting strategies decrease the BP substantially for a network provisioned with set-up delay tolerance, even for small set-up delay tolerance values, i.e., when the connection requests are impatient to wait for provisioning in the network. However, rerouting also reduces BP significantly even when the network is not provisioned with set-up delay tolerance.
@inproceedings{diva2:571462,
author = {Muhammad, Ajmal and Forchheimer, Robert},
title = {{Reducing Blocking Probability in Dynamic WDM Networks by Rerouting and set-up Delay Tolerance}},
booktitle = {17th IEEE International Conference on Networks (ICON), 2011},
year = {2011},
series = {IEEE International Conference on Networks},
volume = {2011},
pages = {195--200},
publisher = {IEEE},
}
We study a dynamic WDM network with nonideal components in the physical layer which uses an impairment-aware routing and wavelength assignment (RWA) algorithm for connection provisioning. We investigate the reduction in blocking probability (BP) obtained by utilizing a Service Level Agreement (SLA) metric, i.e., set-up delay tolerance, during connection provisioning. Furthermore, we explore the improvement in network performance by efficiently utilizing the knowledge of the connections' holding-time, another SLA metric. Keeping in mind that BP reduction can be obtained by set-up delay tolerance [1], our focus is to investigate how set-up delay tolerance combined with holding-time awareness can improve the BP performance affected by physical impairments. Our simulation results confirm that significant improvement can be achieved by holding-time aware connection provisioning compared to the holding-time unaware case. Moreover, as expected, set-up delay tolerance can reduce BP even without knowledge of the connections' holding-time.
@inproceedings{diva2:571440,
author = {Muhammad, Ajmal and Forchheimer, Robert and Wosinska, Lena},
title = {{Impairment-Aware Dynamic Provisioning in WDM Networks with set-up Delay Tolerance and Holding-time Awareness}},
booktitle = {Proceedings of the 17th IEEE International Conference on Networks (ICON), 2011},
year = {2011},
series = {IEEE International Conference on Networks},
volume = {2011},
pages = {213--218},
publisher = {IEEE},
}
In this paper we compare different strategies for reducing blocking probability in dynamic WDM networks. First, we analyze a strategy which effectively utilizes two Service Level Agreement (SLA) metrics, i.e., holding-time combined with set-up delay tolerance, for connection provisioning. We investigate its performance improvement compared to the scheme proposed in [4] and to the strategy which exploits only the set-up delay tolerance for connection provisioning. Secondly, we evaluate the performance of the various approaches for network blocking reduction over a wide range of network loads. Our aim is to obtain insight that can be useful for selecting an optimal strategy for designing a network with specific network parameters and performance requirements.
@inproceedings{diva2:571352,
author = {Muhammad, Ajmal and Forchheimer, Robert},
title = {{Reducing Blocking Probability in Dynamic WDM Networks Using Different Schemes}},
booktitle = {Proceedings 2011 International Conferenceon the Network of the Future, 28-30 November, 2011 Paris, France},
year = {2011},
pages = {97--101},
publisher = {IEEE},
}
We study a dynamic WDM network supporting different service classes (SC), each containing applications with similar set-up delay tolerance. By utilizing the delay tolerance we propose scheduling strategies able to significantly reduce the blocking probability of each SC.
@inproceedings{diva2:571346,
author = {Muhammad, Ajmal and Cavdar, Cicek and Wosinska, Lena and Forchheimer, Robert},
title = {{Effect of Delay Tolerance in WDM Networks with Differentiated Services}},
booktitle = {Optical Fiber Communication Conference and Exposition (OFC/NFOEC), 2011 and the National Fiber Optic Engineers Conference},
year = {2011},
pages = {1--3},
publisher = {IEEE},
}
In this paper, we present a model-based video coding method that uses input from colour and depth cameras, such as the Microsoft Kinect. The model-based approach uses a 3D representation of the scene, enabling several other applications besides video playback. Some of these applications are stereoscopic viewing, object insertion for augmented reality and free viewpoint viewing. The video encoding step uses computer vision to estimate the camera motion. The scene geometry is represented by keyframes, which are encoded as 3D quads using a quadtree, allowing good compression rates. Camera motion in-between keyframes is approximated to be linear. The relative camera positions at keyframes and the scene geometry are then compressed and transmitted to the decoder. Our experiments demonstrate that the model-based approach delivers a high level of detail at competitively low bitrates.
@inproceedings{diva2:525249,
author = {Sandberg, David and Forss\'{e}n, Per-Erik and Ogniewski, Jens},
title = {{Model-Based Video Coding using Colour and Depth Cameras}},
booktitle = {Digital Image Computing},
year = {2011},
pages = {158--163},
publisher = {IEEE},
}
Quantum Key Distribution (QKD - also referred to as Quantum Cryptography) is a technique for secret key agreement. It has been shown that QKD rigged with Information-Theoretic Secure (ITS) authentication (using secret key) of the classical messages transmitted during the key distribution protocol is also ITS. Note that QKD without any authentication can trivially be broken by man-in-the-middle attacks. Here, we study an authentication method that was originally proposed because of its low key consumption; a two-step authentication that uses a publicly known hash function, followed by a secret strongly universal$_2$ hash function, which is exchanged each round. This two-step authentication is not information-theoretically secure, but it was argued that it nevertheless does not compromise the security of QKD. In the current contribution we study intrinsic weaknesses of this approach under the common assumption that the QKD adversary has access to unlimited resources including quantum memories. We consider one implementation of Quantum Cryptographic protocols that use such authentication and demonstrate an attack that fully extracts the secret key. Even including the final key from the protocol in the authentication does not rule out the possibility of these attacks. To rectify the situation, we propose a countermeasure that, while not information-theoretically secure, restores the need for very large computing power for the attack to work. Finally, we specify conditions that must be satisfied by the two-step authentication in order to restore information-theoretic security.
@inproceedings{diva2:515405,
author = {Abidin, Aysajan and Pacher, Christoph and Lorünser, Thomas and Larsson, Jan-Åke and Peev, Momtchil},
title = {{Quantum cryptography and authentication with low key-consumption}},
booktitle = {Proceedings of SPIE - The International Society for Optical Engineering},
year = {2011},
series = {Proceedings of SPIE},
volume = {8189},
pages = {818916--},
}
We present a methodology to extract parameters for an electrolyte-gated organic field effect transistor DC model. The model is based on charge drift/diffusion transport under electric field and covers all regimes. Voltage dependent capacitance, mobility, contact resistance and threshold voltage shift are taken into account in this model. The feature parameters in the model are simply extracted from the transfer or output characteristics of electrolyte-gated organic field effect transistors. The extracted parameters are verified by the good agreement between experimental and simulated results.
@inproceedings{diva2:471797,
author = {Tu, Deyu and Forchheimer, Robert and Herlogsson, Lars and Crispin, Xavier and Berggren, Magnus},
title = {{Parameter extraction for electrolyte-gated organic field effect transistor modeling}},
booktitle = {20th European Conference on Circuit Theory and Design (ECCTD)},
year = {2011},
pages = {853--856},
publisher = {IEEE conference proceedings},
}
Recent months have seen the introduction of several solutions that bring stereoscopic vision to computer games. Although the technical aspects are all well understood, design approaches taking human factors and viewing quality into account have yet to be developed. This paper gives a short introduction to the area, an overview of the already published work in the area of autostereoscopic vision for both computer games and video, as well as useful hints for the game designer and a few initial design principles.
@inproceedings{diva2:438159,
author = {Ogniewski, Jens},
title = {{MAXIMIZING USER COMFORT \& IMMERSION: A GAME DESIGNERS GUIDE TO 3D DISPLAYS}},
booktitle = {Game and Entertainment Technologies 2011},
year = {2011},
pages = {145--148},
}
More and more embedded systems are gaining multimedia capabilities, including computer graphics. Although this is mainly due to their increasing computational capability, optimizations of algorithms and data structures are important as well, since these systems have to fulfill a variety of constraints and cannot be geared solely towards performance. In this paper, the two most popular texture compression methods (DXT1 and PVRTC) are compared in terms of both image quality and decoding performance. For this, both have been ported to the ePUMA platform, which is used as an example of an energy-consumption-optimized embedded system. Furthermore, a new DXT1 encoder has been developed which reaches higher image quality than existing encoders.
@inproceedings{diva2:437292,
author = {Ogniewski, Jens and Karlsson, Andr\'{e}as and Ragnemalm, Ingemar},
title = {{TEXTURE COMPRESSION IN MEMORY AND PERFORMANCE-CONSTRAINED EMBEDDED SYSTEMS}},
booktitle = {Computer Graphics, Visualization, Computer Vision and Image Processing 2011},
year = {2011},
pages = {19--26},
}
The ePUMA architecture is a novel parallel architecture being developed as a platform for low-power computing, typically for embedded or hand-held devices. As part of the exploration of the platform, we have implemented the Euclidean Distance Transform. We outline the ePUMA architecture and describe how the algorithm was implemented.
@inproceedings{diva2:437278,
author = {Ragnemalm, Ingemar and Karlsson, Andr\'{e}as},
title = {{Computing The Euclidean Distance Transform on the ePUMA Parallel Hardware}},
booktitle = {Computer Graphics, Visualization, Computer Vision and Image Processing 2011},
year = {2011},
pages = {228--232},
}
We derive a Gaussian approximation of the LLR distribution conditioned on the transmitted signal and the channel matrix for the soft-output via partial marginalization MIMO detector. This detector performs exact ML as a special case. Our main results consist of discussing the operational meaning of this approximation and a proof that, in the limit of high SNR, the LLR distribution of interest converges in probability towards a Gaussian distribution.
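For reference, in standard MIMO notation of our own (the paper's exact model may differ in details), with received vector $y = Hs + n$ the per-bit log-likelihood ratio whose conditional distribution is approximated is

\[ L_i = \ln \frac{\Pr(b_i = 1 \mid y, H)}{\Pr(b_i = 0 \mid y, H)} , \]

and the result concerns the distribution of $L_i$ conditioned on the transmitted signal $s$ and the channel matrix $H$, which is shown to converge in probability to a Gaussian as the SNR grows.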
@inproceedings{diva2:389872,
author = {\v{C}irki\'{c}, Mirsad and Persson, Daniel and Larsson, Erik G. and Larsson, Jan-Åke},
title = {{Gaussian Approximation of the LLR Distribution for the ML and Partial Marginalization MIMO detectors}},
booktitle = {Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
year = {2011},
series = {IEEE International Conference on Acoustics, Speech and Signal Processing},
pages = {3232--3235},
publisher = {IEEE conference proceedings},
}
We present a project in development, an integrated development environment (IDE) for Mac OS X named Lightweight IDE. The design breaks with the trend of making user interfaces more and more complex, in order to produce a system which is fast to learn and comfortable to use. The project has two goals: to produce a usable system for education and hobbyists, as well as to explore the advantages of a tight user interface for this particular kind of application. The system has been implemented and is fully usable for serious development. It has been evaluated in a limited user study.
@inproceedings{diva2:402835,
author = {Ragnemalm, Ingemar},
title = {{Minimalism for usability; The design of a programming development system with a minimalistic user interface}},
booktitle = {INTERFACES AND HUMAN COMPUTER INTERACTION 2010},
year = {2010},
pages = {446--448},
}
The ePUMA architecture is a novel parallel architecture being developed as a platform for low-power computing, typically for embedded or hand-held devices. It was originally designed for radio baseband processors for hand-held devices and for radio base stations. It has also been adapted for executing high definition video CODECs. In this paper, we investigate the possibilities and limitations of the platform for real-time graphics, with focus on hand-held gaming.
@inproceedings{diva2:402833,
author = {Ragnemalm, Ingemar and Liu, Dake},
title = {{Towards using the ePUMA architecture for hand-held video games}},
booktitle = {COMPUTER GRAPHICS, VISUALIZATION, COMPUTER VISION AND IMAGE PROCESSING 2010},
year = {2010},
pages = {380--384},
}
This paper discusses energy-time entanglement experiments and their relation to Einstein-Podolsky-Rosen (EPR) elements of reality. The interferometric experiment proposed by J. D. Franson in 1989 provides the background, and the main issue here is a detailed discussion on whether a Local Realist model can give the Quantum-Mechanical predictions for this setup. The Franson interferometer gives the same interference pattern as the usual Bell experiment (modulo postselection). Even so, depending on the precise requirements made on the Local Realist model, this can imply a) no violation, b) smaller violation than usual, or c) full violation of the appropriate statistical bound. This paper discusses what requirements are necessary on the model to reach a violation, and the motivation for making these requirements. The alternatives include using a) only the measurement outcomes as EPR elements of reality, b) the emission time as EPR element of reality, and c) path realism. The subtleties of this discussion need to be taken into account when designing and setting up future experiments of this kind, intended to test Local Realism.
@inproceedings{diva2:344883,
author = {Larsson, Jan-Åke},
title = {{Energy-time entanglement, Elements of Reality, and Local Realism}},
booktitle = {QUANTUM THEORY: RECONSIDERATION OF FOUNDATIONS - 5},
year = {2010},
series = {AIP Conference Proceedings},
volume = {1232},
pages = {115--127},
publisher = {American Institute of Physics (AIP)},
}
We compare the performance of two real-time multimedia communication systems for quality versus end-to-end delay. We develop an analytical framework for comparison when the systems use a deterministic time-varying channel. Moreover, we assess their performance for the Gilbert-Elliott channel model which alternates between a good and a bad state with time durations that are exponentially distributed. The goal of the paper is to select the best system with low average distortion while obeying a real-time constraint.
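A minimal sketch of the channel model assumed above: a two-state (good/bad) channel whose sojourn times in each state are exponentially distributed. The function and parameter names below are illustrative only and do not come from the paper.

import random

def gilbert_elliott_trace(rate_good, rate_bad, mean_good, mean_bad, t_end):
    # Returns (start_time, duration, channel_rate) segments for a channel that
    # alternates between a good and a bad state, each sojourn time drawn from
    # an exponential distribution with the given mean.
    t, good, segments = 0.0, True, []
    while t < t_end:
        mean = mean_good if good else mean_bad
        dur = random.expovariate(1.0 / mean)
        segments.append((t, dur, rate_good if good else rate_bad))
        t += dur
        good = not good
    return segments

# example: a 10-second trace with a fast good state and a slow bad state
# trace = gilbert_elliott_trace(2e6, 2e5, mean_good=1.0, mean_bad=0.2, t_end=10.0)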
@inproceedings{diva2:415217,
author = {Muhammad, Ajmal and Johansson, Peter and Forchheimer, Robert},
title = {{Effect of Buffer Placement on Performance When Communicating Over a Rate-Variable Channel}},
booktitle = {ICSNC 2009},
year = {2009},
publisher = {IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA},
}
Recent years have seen a lot of work on local descriptors. In all published comparisons or evaluations, the now quite well-known SIFT-descriptor has been one of the top performers. For the application of object pose estimation, one comparison showed a local descriptor, called the Patch-Duplet, of equal or better performance than SIFT. This paper examines different properties of those two descriptors by forming hybrids between them and extending the object pose tests of the original Patch-Duplet paper. All tests use real images.
@inproceedings{diva2:318015,
author = {Viksten, Fredrik},
title = {{Object Pose Estimation using Patch-Duplet/SIFT Hybrids}},
booktitle = {Proceedings of the 11th IAPR Conference on Machine Vision Applications},
year = {2009},
pages = {134--137},
address = {Tokyo, Japan},
}
This year in Växjö we thought we would try an experiment—it felt high time for a new result. Much of the foundations discussion of previous years has focussed on EPR-style arguments and the meaning and experimental validity of various Bell inequality violations. Yet, there is another pillar of the quantum foundations puzzle that has hardly received any attention in our great series of meetings: It is the phenomenon first demonstrated by Kochen and Specker, quantum contextuality. Recently there has been a rapid growth of activity aimed toward better understanding this aspect of quantum mechanics, which Asher Peres sloganized by the phrase, “unperformed experiments have no results.” Below is a sampling of some important papers on the topic for the reader not yet familiar with the subject. What is the source of this phenomenon? Does it depend only on high level features of quantum mechanics, or is it deep in the conceptual framework on which the theory rests? Might it, for instance, arise from the way quantum mechanics amends the classic laws of probability? What are the mathematically simplest ways contextuality can be demonstrated? How might the known results be made amenable to experimental tests? These were the sorts of discussions we hoped the session would foster.
@inproceedings{diva2:284422,
author = {Fuchs, Christopher and Larsson, Jan-Åke},
title = {{Foreword: Unperformed experiments have no results}},
booktitle = {FOUNDATIONS OF PROBABILITY AND PHYSICS - 5},
year = {2009},
series = {AIP Conference Proceedings},
volume = {1101},
pages = {221--222},
publisher = {American Institute of Physics (AIP)},
address = {Melville, New York},
}
@inproceedings{diva2:281634,
author = {Cabello, Adan and Larsson, Jan-Åke and Rodriguez, David},
title = {{Eficiencia crítica para las desigualdades encadenadas de Braunstein y Caves}},
booktitle = {XXXII Bienal de la Real Sociedad Española de Física (Ciudad Real, 7 al 11 de Septiembre de 2009), Real Sociedad Española de Física, Madrid, 2009, 529},
year = {2009},
}
This paper considers approximations of marginalization sums that arise in Bayesian inference problems. Optimal approximations of such marginalization sums, using a fixed number of terms, are analyzed for a simple model. The model under study is motivated by recent studies of linear regression problems with sparse parameter vectors, and of the problem of discriminating signal-plus-noise samples from noise-only samples. It is shown that for the model under study, if only one term is retained in the marginalization sum, then this term should be the one with the largest a posteriori probability. By contrast, if more than one (but not all) terms are to be retained, then these should generally not be the ones corresponding to the components with largest a posteriori probabilities.
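In generic notation (ours, not necessarily the paper's), the marginalization sums in question have the form

\[ p(y) = \sum_{k=1}^{M} p(y \mid \mathcal{H}_k)\, p(\mathcal{H}_k) \;\approx\; \sum_{k \in \mathcal{S}} p(y \mid \mathcal{H}_k)\, p(\mathcal{H}_k), \qquad |\mathcal{S}| = K < M, \]

and the question studied is how to choose the retained index set $\mathcal{S}$: for $K = 1$ the term with the largest a posteriori probability is optimal, whereas for $1 < K < M$ the optimal set is in general not the $K$ a posteriori most probable terms.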
@inproceedings{diva2:246025,
author = {Axell, Erik and Larsson, Erik G. and Larsson, Jan-Åke},
title = {{On the Optimal K-term Approximation of a Sparse Parameter Vector MMSE Estimate}},
booktitle = {Proceedings of the 2009 IEEE Workshop on Statistical Signal Processing (SSP'09)},
year = {2009},
pages = {245--248},
publisher = {IEEE},
}
State of the art and coming hyperspectral optical sensors generate large amounts of data and automatic analysis is necessary. One example is Automatic Target Recognition (ATR), frequently used in military applications and a coming technique for civilian surveillance applications. When sensors communicate in networks, the capacity of the communication channel defines the limit of data transferred without compression. Automated analysis may have different demands on data quality than a human observer, and thus standard compression methods may not be optimal. This paper presents results from testing how the performance of detection methods is affected by compressing input data with COTS coders. A standard video coder has been used to compress hyperspectral data. A video is a sequence of still images; a hybrid video coder uses the correlation in time by doing block-based motion-compensated prediction between images. In principle only the differences are transmitted. This method of coding can be used on hyperspectral data if we consider one of the three dimensions as the time axis. Spectral anomaly detection is used as the detection method on mine data. This method finds every pixel in the image that is abnormal, an anomaly compared to the surroundings. The purpose of anomaly detection is to identify objects (samples, pixels) that differ significantly from the background, without any a priori explicit knowledge about the signature of the sought-after targets. Thus the role of the anomaly detector is to identify “hot spots” on which subsequent analysis can be performed. We have used data from Imspec, a hyperspectral sensor. The hyperspectral image, or the spectral cube, consists of consecutive frames of spatial-spectral images. Each pixel contains a spectrum with 240 measurement points. Hyperspectral sensor data was coded with hybrid coding using a variant of MPEG2. Only I- and P-frames were used. Every 10th frame was coded as a P-frame. 14 hyperspectral images were coded in 3 different directions, using the x, y, or z direction as time. 4 different quantization steps were used. Coding was done with and without initial quantization of data to 8 bpp. Results are presented from applying spectral anomaly detection on the coded data set.
@inproceedings{diva2:241777,
author = {Linderhed, Anna and Wadströmer, Niclas and Stenborg, Karl-Göran and Nautsch, Harald},
title = {{Compression of Hyperspectral data for Automated Analysis}},
booktitle = {SPIE Europe Remote Sensing 2009},
year = {2009},
}
Secure message authentication is an important part of Quantum Key Distribution. In this paper we analyze special properties of a Strongly Universal$_2$ hash function family, an understanding of which is important in the security analysis of the authentication used in Quantum Cryptography. We answer the following question: How much of Alice's message does Eve need to influence so that the message along with its tag will give her enough information to create the correct tag for her message?
@inproceedings{diva2:221270,
author = {Abidin, Aysajan and Larsson, Jan-Åke},
title = {{Special Properties of Strongly Universal$_{2}$ Hash Functions Important in Quantum Cryptography}},
booktitle = {AIP Conference Proceedings, ISSN 0094-243X, Foundations of Probability and Physics--5, Växjö, augusti 2008},
year = {2009},
pages = {289--293},
publisher = {American Institute of Physics},
address = {New York},
}
The Kochen-Specker paradox has recently been subject to experimental interest, and in this situation the number of steps in the proof in question is important. The fewer steps there are in the proof, the more imperfections can be tolerated in the experimental setup. In the spin-1 version of the Kochen-Specker paradox, when the settings used are directions in three-dimensional space, the proofs can be easily visualized and the steps can easily be counted. In particular, the original Kochen-Specker paradox makes use of so-called great-circle descents. Here, we will examine such descents in detail and also some other versions of the proof for spin-1 systems. We will see that, perhaps contrary to intuition, the proofs that use a small number of steps do not in general use only great-circle descents, and examine the reasons for this and possible extensions. At least one new proof will also be presented for the spin-1 case.
@inproceedings{diva2:221269,
author = {Larsson, Jan-Åke},
title = {{The Kochen-Specker Paradox and Great-Circle Descents}},
booktitle = {Foundations of Probability and Physics--5},
year = {2009},
series = {AIP Conference Proceedings},
volume = {1101},
pages = {280--286},
publisher = {American Institute of Physics (AIP)},
address = {Melville, NY, USA},
}
Point-of-interest detection is a way of reducing the amount of data that needs to be processed in a certain application and is widely used in 2D image analysis, where it is usually related to the extraction of local descriptors for object recognition, classification, registration or pose estimation. For the analysis of range data, however, some local descriptors have been published in the last decade or so, but most of them do not mention any kind of point-of-interest detection. We here show how to use an extended Harris detector on range data and discuss variants of the Harris measure. All described variants of the Harris detector for 3D should also be usable in medical image analysis, but we focus on the range data case. We present a performance evaluation of the described variants of the Harris detector on range data.
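As a point of reference, a plain Harris-type interest measure on a range image (depth values on a regular grid) can be sketched as below; this is the classical 2D structure-tensor response applied to depth data, not the specific 3D extensions or variants evaluated in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(depth, sigma=2.0, k=0.04):
    # Structure tensor of the depth image, smoothed with a Gaussian window,
    # followed by the Harris response det(M) - k * trace(M)^2.
    depth = depth.astype(float)
    gy, gx = np.gradient(depth)
    Ixx = gaussian_filter(gx * gx, sigma)
    Iyy = gaussian_filter(gy * gy, sigma)
    Ixy = gaussian_filter(gx * gy, sigma)
    return Ixx * Iyy - Ixy ** 2 - k * (Ixx + Iyy) ** 2

# points of interest are local maxima of the response above a threshold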
@inproceedings{diva2:265790,
author = {Viksten, Fredrik and Nordberg, Klas and Kalms, Mikael},
title = {{Point-of-Interest Detection for Range Data}},
booktitle = {International Conference on Pattern Recognition (ICPR)},
year = {2008},
series = {Pattern Recognition},
pages = {1--4},
publisher = {IEEE},
}
This work examines the possibility to, with the computational power of today’s consumer hardware, employ techniques previously developed for 3D tracking of rigid objects, and use them for tracking of deformable objects. Our target objects are human faces in a video conversation pose, and our purpose is to create a deformable face tracker based on a head tracker operating in real-time on consumer hardware. We also investigate how to combine model-based and image-based tracking in order to get precise tracking and avoid drift.
@inproceedings{diva2:846266,
author = {Ingemars, Nils and Ahlberg, Jörgen},
title = {{Feature-based Face Tracking using Extended Kalman Filtering}},
booktitle = {Swedish Symposium on Image Analysis (SSBA), Linköping, Sweden, 14-15 mars 2007},
year = {2007},
publisher = {Swedish Society for automated image analysis},
}
This is an account of an attempt to improve students' communicative skills, with a focus on mathematics. The intent is to give the students skill and experience in communicating in an environment where precision is important, both in mathematics and science in general, but also in engineering. The first part of the course is intended to improve the students' ability to follow a logical argument, especially long (even infinite) chains of logical arguments. Later parts of the course focus more on the practice of presenting, discussing, and writing mathematics. Examination is not by a written exam; it consists of the students' participation in oral presentations and the ensuing discussions, a one-page handwritten hand-in at the start of the course, and finally a short typed piece on a suitable mathematical topic. Experiences from this first attempt are discussed, and the most striking effect is the visibly improving oral communication skills of the students as the course proceeds. There are also indications that participation in this course is beneficial to later mathematics courses, but only for the able students. We do expect an improved overall performance of the students but there is no clear effect as yet, partly because not enough (read: any) time has passed since the course finished, but perhaps also because the sample is small.
@inproceedings{diva2:259478,
author = {Larsson, Jan-Åke},
title = {{Communication of Mathematics' as a tool to improve students' general communicative skills}},
booktitle = {The 3rd International CDIO Conference,,2007},
year = {2007},
publisher = {MIT},
address = {Cambridge, MA},
}
This paper makes an attempt to present Quantum Mechanics in simple terms, but without oversimplification. It will be, in essence, a comparison of the usual understanding of randomness to the more difficult notion of "quantumness." The restricted format of this paper will unfortunately force the presentation to be very terse. But perhaps this paper can be seen as a synopsis of the approach I have in mind.
@inproceedings{diva2:263899,
author = {Larsson, Jan-Åke},
title = {{The quantum and the random:
Similarities, differences, and "contradictions"}},
booktitle = {Quantum Theory: Reconsideration of Foundations,2005},
year = {2006},
series = {AIP Conference Proceeding},
pages = {353--359},
publisher = {American Institute of Physics (AIP)},
address = {Melville, NY, USA},
}
@inproceedings{diva2:258019,
author = {Larsson, Jan-Åke and Cederlöf, Jörgen},
title = {{Security aspects of the authentication used in quantum key growing}},
booktitle = {Foundations of Probability and Physics,2006},
year = {2006},
}
@inproceedings{diva2:258010,
author = {Larsson, Jan-Åke and Cederlöf, Jörgen},
title = {{Security aspects of the authentication used in quantum key growing}},
booktitle = {Advanced Free-Space Optical Communication Techniques/Applications III,2006},
year = {2006},
series = {Proceedings of SPIE - International Society for Optical Engineering},
volume = {6399},
pages = {63990H--},
publisher = {SPIE - International Society for Optical Engineering},
address = {Bellingham, WA ,USA},
}
The electrochemical transistor is presented from a functional point-of-view. It is shown that this transistor has characteristics that are similar to p-channel depletion-mode MOSFET devices. Electrical design rules for proper operation are given. Based on these rules, we show how logical circuits such as inverters and gates can be constructed.
@inproceedings{diva2:471833,
author = {Nilsson, David and Forchheimer, Robert and Berggren, Magnus and Robinson, Nathaniel},
title = {{The electrochemical transistor and circuit design considerations}},
booktitle = {Proceedings of the 2005 European Conference on Circuit Theory and Design},
year = {2005},
pages = {III/349--III/352},
publisher = {IEEE conference proceedings},
}
@inproceedings{diva2:402836,
author = {Ragnemalm, Ingemar and Hjelm Andersson, Patrik},
title = {{Shape matching on the Euclidean Distance Transform}},
booktitle = {Swedish Symposium on Image Analysis},
year = {2005},
pages = {21--24},
}
@inproceedings{diva2:252708,
author = {Larsson, Jan-Åke and Gill, Richard D},
title = {{Bell's inequality and the coincidence-time loophole}},
booktitle = {Foundations of probability and physics,2004},
year = {2005},
pages = {228--},
publisher = {American Institute of Physics},
address = {New York},
}
We introduce the concept of mobility-based communication in ad hoc networks, meaning that the packet transport is performed mainly by the nodes' movement. We outline a model for such networks, utilizing a stochastic model for the geographical location of the nodes. A test case is defined in which three strategies for packet forwarding are presented and evaluated.
@inproceedings{diva2:252465,
author = {Löfvenberg, Jacob and Johansson, Peter and Forchheimer, Robert},
title = {{A Model for Mobility-Based Communication in Ad Hoc Networks}},
booktitle = {The Third Swedish National Computer Networking Workshop,2005},
year = {2005},
}
A new approach to low redundancy coding for reducing power dissipation in parallel on-chip, deep sub-micron buses is presented. It is shown that the new approach allows lower power dissipation than previous solutions in the given model, yielding reductions of 24% to 41% compared to uncoded transmission for the considered bus widths. Finally some important open problems are given.
@inproceedings{diva2:252462,
author = {Löfvenberg, Jacob and Lindgren, Tina},
title = {{Minimal Redundancy, Low Power Bus Coding}},
booktitle = {23rd NORCHIP Conference,2005},
year = {2005},
pages = {277--},
publisher = {University Oulu},
address = {Oulu, Finland},
}
In a typical broadcast encryption scenario, a sender wishes to securely transmit messages to a subset of receivers, the intended set, using a broadcast channel. Several schemes for broadcast encryption exist and they allow the sender to reach a privileged set of receivers and, by the use of encryption, block all others from receiving the message. Most of the existing broadcast encryption literature assumes that the intended set and the privileged set are equal, but this is not always necessary. In some applications a slight difference between the intended and the privileged set may be tolerated if the cost of transmitting the message decreases sufficiently. It has been suggested that a few free-riders, users not in the intended set but in the privileged set, may be allowed in some scenarios. In rare cases the opposite could also be possible, that is, some users are in the intended set but not in the privileged set. Our approach is to use the information theoretic concept of distortion to measure the discrepancy between the intended and the privileged sets. As a cost measure we use the average number of transmissions required to send one message. As an example of the use of these measures we have developed three simple algorithms that aim to lower the cost by adding some distortion; one greedy algorithm and two versions of an algorithm based on randomness. By simulations we have compared them using our cost and distortion measures. The subset difference (SD) scheme has been used as the underlying broadcast encryption scheme. The greedy algorithm is not tightly bound to the SD scheme while the two randomness-based algorithms make some use of the properties of the SD scheme.
@inproceedings{diva2:250478,
author = {Anderson, Kristin},
title = {{Cost-Distortion Measures for Broadcast Encryption}},
booktitle = {NordSec 2005. Student session,2005},
year = {2005},
}
In this paper we consider the subset difference scheme for broadcast encryption and count the number of required broadcast transmissions when using this scheme. The subset difference scheme organizes receivers in a tree structure and we note that isomorphic trees yield the same number of required broadcast transmissions. Based on the isomorphism the trees can be partitioned into classes. We suggest to use the vast amount of tools available from the theory of groups to analyze the subset difference scheme and therefore we formulate the mappings between isomorphic trees using concepts from group theory. Finally we identify some research issues for further study of the performance of the subset difference scheme using group theory.
@inproceedings{diva2:249541,
author = {Anderson, Kristin and Claesson, Fredrik and Löfvenberg, Jacob and Ingemarsson, Ingemar},
title = {{The Algebraic Structure of a Broadcast Encryption Scheme}},
booktitle = {Radiovetenskap och Kommunikation, RVK05,2005},
year = {2005},
}
We consider the broadcast encryption problem where one sender wishes to transmit messages securely to a selected set of receivers using a broadcast channel, as is the case in digital television for example. Specifically, we study the subset difference scheme for broadcast encryption and the number of broadcast transmissions required when using this scheme. The effects of adjacency in the user set are considered and we introduce the notion of transitions in the user set as a means to quantify the adjacency. We present upper and lower bounds for the number of transmissions based on the number of transitions between privileged and nonprivileged users in the user set. For cases where the privileged users are gathered in a few groups we derive the maximum number of transmissions.
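A small illustration of the adjacency notion used above, under a simplified reading of our own (the paper's formal definition may differ): the user set is an ordered 0/1 vector with 1 marking a privileged receiver, a transition is a position where membership changes, and the bounds are expressed in terms of this count.

def count_transitions(users):
    # users: ordered sequence of 0/1 flags, 1 = privileged receiver
    return sum(1 for a, b in zip(users, users[1:]) if a != b)

def privileged_groups(users):
    # number of maximal runs of adjacent privileged users
    groups, prev = 0, 0
    for u in users:
        if u and not prev:
            groups += 1
        prev = u
    return groups

# example: [1, 1, 0, 0, 1, 0, 1, 1] has 4 transitions and 3 privileged groups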
@inproceedings{diva2:249540,
author = {Anderson, Kristin and Claesson, Fredrik and Löfvenberg, Jacob},
title = {{Effects of User Adjacency in the Subset Difference Scheme for Broadcast Encryption}},
booktitle = {Radiovetenskap och Kommunikation, RVK05,2005},
year = {2005},
}
Using a newly introduced alternative to a conventional SRAM cell, a binary zero can be written with a much lower power consumption than a binary one. Such a solution reduces power consumption, especially if there are few ones in the data, that is, if the data has a low Hamming weight. If the data is not inherently of low weight, this can be achieved by encoding the data. In the paper such coding is discussed and in small cases energy efficient encoding and decoding realizations are presented.
@inproceedings{diva2:249539,
author = {Löfvenberg, Jacob},
title = {{Coding circuits for reducing Hamming weight}},
booktitle = {Radiovetenskap och Kommunikation, RVK05,2005},
year = {2005},
}
We present two coding techniques for reducing the power dissipation in deep sub-micron, parallel data buses. The techniques differ in their parameter values and are suitable in different scenarios. In both cases typical reduction in power dissipation is 20%.
@inproceedings{diva2:249538,
author = {Löfvenberg, Jacob and Gustafsson, Oscar and Johansson, Kenny and Lindkvist, Tina and Ohlsson, Henrik and Wanhammar, Lars},
title = {{Coding schemes for deep sub-micron data buses}},
booktitle = {Radiovetenskap och Kommunikation, RVK05,2005},
year = {2005},
}
We discuss coding for deep sub-micron buses with highly sequential data, the typical application being address buses, and we note that coding techniques specifically targeted at this application are considerably better than general techniques. Previously proposed coding schemes are described and a new, non-redundant coding technique with a very small realization and more than 50% reduction of power dissipation is presented.
@inproceedings{diva2:249537,
author = {Löfvenberg, Jacob},
title = {{Coding schemes for highly sequential data in deep sub-micron buses}},
booktitle = {Radiovetenskap och Kommunikation, RVK05,2005},
year = {2005},
}
Watermarking embeds a signature in digital documents such as video and can be used to discourage illegal copying. Individual watermarking means that every customer receives an individual document that can be traced back to the customer if he is a pirate. Multicast is a way to distribute the same digital object to many users without the need to send one specific copy to each user. To combine distribution of individually watermarked documents with multicast might at first glance seem to be impossible. During the last years some methods have been developed to achieve that type of scalable distribution of individually watermarked documents. Some of these methods use special overlay networks to embed the signature and others use cryptography to produce the watermarks.
@inproceedings{diva2:249535,
author = {Stenborg, Karl-Göran},
title = {{Multicast Distribution of Video using Individual Watermarks}},
booktitle = {RadioVetenskap och Kommunikation,2005},
year = {2005},
pages = {539--},
publisher = {FOI},
address = {Linköping},
}
We consider the problem of choosing the rate for a source coder when transmitting lossy-compressed data in real-time over a channel with time-varying rate. The goal for the rate selection is to obtain a low average distortion while obeying a real-time constraint. We formulate the real-time constraint in terms of a limited buffer size. A few strategies for rate-control are suggested and evaluated for a Gilbert-Elliott type channel model. The results are also compared to a theoretical upper bound on performance for a rate-control algorithm working with constraints on buffer size.
@inproceedings{diva2:244506,
author = {Johansson, Peter and Forchheimer, Robert},
title = {{Source Coding Rate Control for Gilbert-Elliott Channel}},
booktitle = {RVK 05,2005},
year = {2005},
pages = {543--},
publisher = {FOI},
address = {Linköping},
}
In this paper we present a simplified model of parallel, on-chip buses, motivated by the movement toward CMOS technologies where the ratio between inter-wire capacitance and wire-to-ground capacitance is very large. We also introduce a ternary bus state representation, suitable for the bus model. Using this representation we propose a coding scheme without memory which reduces energy dissipation in the bus model by approximately 20-30% compared to an uncoded system. At the same time the proposed coding scheme is easy to realize, in terms of standard cells needed, compared to several previously proposed solutions.
@inproceedings{diva2:243214,
author = {Lindkvist, Tina and Löfvenberg, Jacob and Ohlsson, Henrik and Johansson, Kenny and Wanhammar, Lars},
title = {{A Power-Efficient, Low-Complexity, Memoryless Coding Scheme for Buses with Dominating Inter-Wire Capacitances}},
booktitle = {IEEE International Workshop on System on Chip for Real-Time Applications,2004},
year = {2004},
pages = {257--},
publisher = {IEEE Computer Society},
address = {Los Alamitos, California, USA},
}
In this paper we present a simplified model for deep sub-micron, on-chip, parallel data buses. Using this model, a coding technique similar to Bus Invert Coding is presented, but with better performance in the proposed model. The coding technique can be realized using low-complexity encoding and decoding circuitry, with a complexity that scales linearly with the bus width. Simulation results show that the energy dissipation decreases by approximately 20% for buses with up to 16 wires.
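For comparison, the classical bus-invert scheme that the abstract refers to can be sketched as follows: if sending the next word would toggle more than half of the data wires relative to what is currently on the bus, the inverted word is sent instead and an extra invert line is raised. This is only the reference technique, not the deep sub-micron variant proposed in the paper.

def bus_invert_encode(words, width):
    # Classical bus-invert coding: returns (transmitted_word, invert_flag) pairs.
    mask = (1 << width) - 1
    prev = 0
    out = []
    for w in words:
        toggles = bin((w ^ prev) & mask).count('1')
        if toggles > width // 2:
            tx, inv = (~w) & mask, 1   # inverting toggles fewer wires
        else:
            tx, inv = w, 0
        out.append((tx, inv))
        prev = tx
    return out

# the decoder simply re-inverts the received word whenever the invert flag is set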
@inproceedings{diva2:243219,
author = {Lindkvist, Tina and Löfvenberg, Jacob and Gustafsson, Oscar},
title = {{Deep Sub-Micron Bus Invert Coding}},
booktitle = {Proceedings of the 6th Nordic Signal Processing Symposium, 2004. NORSIG 2004.},
year = {2004},
pages = {133--136},
publisher = {University of Technology},
address = {Helsinki},
}
A coding technique for deep sub-micron address buses with inter-wire capacitances dominating the wire-to-ground capacitances is presented. This code is similar to Gray codes, in the sense that it defines an ordering of the binary space such that adjacent codewords dissipate little energy when sent consecutively. The ordering is shown to be close to optimal with respect to energy dissipation when sending the whole sequence in order. A circuit diagram realizing the coder is presented, using only n-1 two-input gates, where n is the bus width. Simulations show an improvement in energy dissipation of more than 50% over an uncoded bus in several cases, depending on the data being coded.
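The familiar baseline for the ordering idea mentioned above is the binary-reflected Gray code, in which consecutive codewords differ in exactly one bit, so a sequential address stream toggles a single wire per transfer. The paper's ordering is instead tuned to the inter-wire-capacitance energy model, so the snippet below is only the classical reference, not the proposed code.

def gray(i):
    # binary-reflected Gray code of index i
    return i ^ (i >> 1)

# consecutive addresses map to codewords that differ in exactly one bit
assert all(bin(gray(i) ^ gray(i + 1)).count('1') == 1 for i in range(255))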
@inproceedings{diva2:243209,
author = {Löfvenberg, Jacob},
title = {{Non-Redundant Coding for Deep Sub-Micron Address Buses}},
booktitle = {IEEE International Workshop on System on Chip for Real-Time Applications,2004},
year = {2004},
pages = {275--},
publisher = {IEEE Computer Societiy},
address = {Los Alamitos, California, USA},
}
In this paper, we consider the potential of adapting a 3D deformable face model to video sequences. Two adaptation methods are proposed. The first method computes the adaptation using a locally exhaustive and directed search in the parameter space. The second method decouples the estimation of head and facial feature motion. It computes the 3D head pose by combining: (i) a robust feature-based pose estimator, and (ii) a global featureless criterion. The facial animation parameters are then estimated with a combined exhaustive and directed search. Tracking experiments and performance evaluation demonstrate the feasibility and usefulness of the developed methods. These experiments also show that the proposed methods can outperform the adaptation based on a directed continuous search.
@inproceedings{diva2:849732,
author = {Dornaika, Fadi and Ahlberg, Jörgen},
title = {{Face Model Adaptation for Tracking and Active Appearance Model Training}},
booktitle = {Proceedings of the British Machine Vision Conference},
year = {2003},
pages = {57.1--57.10},
}
We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and out-performance of the developed framework.
@inproceedings{diva2:849731,
author = {Ahlberg, Jörgen and Dornaika, Fadi},
title = {{Efficient active appearance model for real-time head and facial feature tracking}},
booktitle = {Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on},
year = {2003},
pages = {173--180},
publisher = {IEEE conference proceedings},
}
@inproceedings{diva2:243613,
author = {Larsson, Jan-Åke},
title = {{Bell Inequalities for Position Measurements}},
booktitle = {Quantum Theory-reconsideration of foundations-2,2003},
year = {2003},
}
This paper presents new methods for use of dense motion fields for motion compensation of interlaced video. The motion is estimated using previously decoded field-images. An initial motion compensated prediction is produced using the assumption that the motion is predictable in time. The motion estimation algorithm is phase-based and uses two or three field-images to achieve motion estimates with sub-pixel accuracy. To handle non-constant motion and the specific characteristics of the field-image to be coded, the initially predicted image is refined using forward motion compensation, based on block-matching. Tests show that this approach achieves higher PSNR than forward block-based motion estimation, when coding the residual with the same coder. The subjective performance is also better.
@inproceedings{diva2:244840,
author = {Andersson, Kenneth and Johansson, Peter and Forchheimer, Robert and Knutsson, Hans},
title = {{Backward-forward motion compensated prediction}},
booktitle = {Proceedings of ACIVS 2002 (Advanced Concepts for Intelligent Vision Systems), Ghent, Belgium, September 9-11, 2002},
year = {2002},
pages = {260--267},
}
A probabilistic version of the Kochen-Specker paradox is presented. The paradox is restated in the form of an inequality relating probabilities from a non-contextual hidden-variable model, by formulating the concept of "probabilistic contextuality." This enables an experimental test for contextuality at low experimental error rates. Using the assumption of independent errors, an explicit error bound of 0.71% is derived, below which a Kochen-Specker contradiction occurs.
@inproceedings{diva2:259442,
author = {Larsson, Jan-Åke},
title = {{A probabilistic inequality for the Kochen-Specker paradox}},
booktitle = {Foundations of Probability and Physics,2000},
year = {2001},
pages = {236--245},
publisher = {World Scientific},
address = {Singapore},
}
Theses
The protection of confidential data is a fundamental need in the society in which we live. This task becomes more relevant when observing that, every day, data traffic increases exponentially, as well as the number of attacks on the telecommunication infrastructure. From the natural sciences, it has been strongly argued that quantum communication has great potential to solve this problem, to such an extent that various governmental and industrial entities believe the protection provided by quantum communications will be an important layer in the field of information security in the next decades. However, integrating quantum technologies both in current optical networks and in industrial systems is not a trivial task, taking into account that a large part of current quantum optical systems are based on bulk optical devices, which could become an important limitation. Throughout this thesis we present an all-in-fiber optical platform that allows a wide range of tasks that aim to take a step forward in terms of generation and detection of photonic states. Among the main features, the generation and detection of photonic quantum states carrying orbital angular momentum stand out.
The platform can also be configured for the generation of random numbers from quantum mechanical measurements, a central aspect in future information tasks.
Our scheme is based on the use of new space-division-multiplexing (SDM) technologies such as few-mode-fibers and photonic lanterns. Furthermore, our platform can also be scaled to high dimensions, it operates in 1550 nm (telecommunications band) and all the components used for its implementation are commercially available. The results presented in this thesis can be a solid alternative to guarantee the compatibility of new SDM technologies in emerging experiments on optical networks and open up new possibilities for quantum communication.
@phdthesis{diva2:1797425,
author = {Alarcón, Alvaro},
title = {{All-Fiber System for Photonic States Carrying Orbital Angular Momentum:
A Platform for Classical and Quantum Information Processing}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2340}},
year = {2023},
address = {Sweden},
}
Society as we know it today would not have been possible without the explosive and astonishing development of telecommunications systems, and optical fibers have been one of the pillars of these technologies.
Despite the enormous amount of data being transmitted over optical networks today, the trend is that the demand for higher bandwidths will also increase. Given this context, a central element in the design of telecommunications networks will be data security, since information can often be confidential or private.
Quantum information emerges as a solution to encrypt data by quantum key distribution (QKD) between two users. This technique relies on the properties of nature as its fundamentals of operation rather than on mathematical constructs to provide data protection. A popular way of performing QKD is to use the relative phase between two individual photon paths for information encoding. However, this method has not been practical over long distances. The time-bin-based scheme was a solution to the previous problem given its practical nature; however, it introduces intrinsic losses due to its design, which increase with the dimension of the encoded quantum system.
In this thesis we have designed and tested a fiber-optic platform using spatial-division-multiplexing techniques. Few-mode fibers and photonic lanterns are the cornerstones of our proposal, which also allow us to support orbital angular momentum (OAM) modes. The platform builds on the core ideas of the phase-coded quantum communication system and also takes advantage of the benefits offered by the time-bin scheme. We have experimentally tested our proposal by successfully transmitting phase-coded single-photon states over 500 m of few-mode fiber, demonstrating the feasibility of our scheme. We demonstrated the successful creation of OAM states, their propagation, and their successful detection in an all-in-fiber scheme. Our platform eliminates the post-selection losses of time-bin quantum communication systems, ensures compatibility with next-generation optical networks, and opens up new possibilities for quantum communication.
@phdthesis{diva2:1653777,
author = {Alarcón Cuevas, Alvaro},
title = {{A Few-Mode-Fiber Platform for Quantum Communication Applications}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Licentiate Thesis No. 1935}},
year = {2022},
address = {Sweden},
}
In this thesis we address the question: what is the resource, or property, that enables the advantage of quantum computers? The theory of quantum computers dates back to the eighties, so one might think this question had already been answered. There are several proposed solutions, but to this date there is no consensus on an answer.
Primarily, the advantage of quantum computers is characterized by a speedup for certain computational problems. This speedup is measured by comparing quantum algorithms with the best-known classical algorithms. For some algorithms we assume access to an object called an oracle. The oracle computes a function, and the complexity of the oracle is of no concern. Instead, we count the number of queries to the oracle needed to solve the problem. Informally, the question we ask using an oracle is: if we can compute this function efficiently, what else could we then compute? However, when using oracles to measure a quantum speedup, we assume access to vastly different oracles residing in different models of computation.
For our investigation of the speedup, we introduce a classical simulation framework that imitates quantum algorithms. The simulation suggests that the property enabling the potential quantum speedup is the ability to store, process, and retrieve information in an additional degree of freedom. We then verify theoretically that this holds for all problems that can be efficiently solved with a quantum computer.
In parallel to this, we also see that quantum oracles sharply specify the information we can retrieve from the additional degree of freedom, while regular oracles do not. A regular oracle does not even allow for an extra degree of freedom. We conclude that comparing quantum with classical oracle query complexity bounds does not provide conclusive evidence for a quantum advantage.
@phdthesis{diva2:1612116,
author = {Johansson, Niklas},
title = {{A Resource for Quantum Computation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 2191}},
year = {2021},
address = {Sweden},
}
Recent years have seen the advent of RGB+D video (color+depth video), which enables new applications like free-viewpoint video, 3D, and virtual reality. This is, however, achieved by adding data, thus increasing the bitrate. On the other hand, the added geometrical data can be used for more accurate frame prediction, thus decreasing the bitrate. Modern encoders use previously decoded frames to predict other ones, meaning they only need to encode the difference. When geometrical data is available, previous frames can instead be projected to the frame that is currently predicted, thus reaching a higher accuracy and a higher compression.
In this thesis, different techniques are described and evaluated enabling such a prediction scheme based on projecting from depth images, so-called depth-image-based rendering (DIBR). A DIBR method is found that maximizes image quality, in terms of minimizing the differences between the projected frame and the ground truth of the frame it was projected to, i.e. the frame that is to be predicted. This was achieved by evaluating combinations of both state-of-the-art methods for DIBR and own extensions, meant to solve artifacts that were discovered during this work. Furthermore, a real-time version of this DIBR method is derived and, since the depth maps will be compressed as well, the impact of depth-map compression on the achieved projection quality is evaluated for different compression methods, including novel extensions of existing methods. Finally, spline methods are derived for both geometrical and color interpolation.
Although all this was done with a focus on video compression, many of the presented methods are useful for other applications as well, like free-viewpoint video or animation.
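As a rough illustration of the forward-warping step that DIBR builds on (a bare-bones Python/NumPy sketch with hypothetical intrinsics K and relative pose (R, t); the thesis's methods additionally handle holes, occlusions, and artifacts, which are omitted here):

    import numpy as np

    def forward_project(depth, K_src, K_dst, R, t):
        # Warp pixel coordinates of a source view into a destination view
        # using per-pixel depth: back-project, transform, re-project.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
        # Back-project to 3D points in the source camera frame.
        points = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)
        # Transform into the destination camera frame and project.
        points_dst = R @ points + t.reshape(3, 1)
        proj = K_dst @ points_dst
        u, v = proj[0] / proj[2], proj[1] / proj[2]
        return u.reshape(h, w), v.reshape(h, w)

    # Hypothetical example: identical intrinsics, small horizontal baseline.
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    depth = np.full((480, 640), 2.0)              # flat scene 2 m away
    u, v = forward_project(depth, K, K, np.eye(3), np.array([0.1, 0, 0]))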
@phdthesis{diva2:1371394,
author = {Ogniewski, Jens},
title = {{Interpolation Techniques with Applications in Video Coding}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Licentiate Thesis No. 1858}},
year = {2019},
address = {Sweden},
}
Quantum computation solves some computational problems faster than the best-known alternative in classical computation. The evidence for this consists of examples where a quantum algorithm outperforms the best-known classical algorithm. A large body of these examples relies on oracle query complexity, where the performance (complexity) of the algorithms is measured by the number of times they need to access an oracle. Here, an oracle is usually considered to be a black box that computes a specific function at unit cost.
However, the quantum algorithm is given access to an oracle with more structure than the classical algorithm. This thesis argues that the two oracles are so vastly different that comparing quantum and classical query complexity should not be considered evidence, but merely hints for a quantum advantage.
The approach used is based on a model that can be seen as an approximation of quantum theory but can be efficiently simulated on a classical computer. This model solves several oracular problems with the same performance as their quantum counterparts, showing that there is no genuine quantum advantage for these problems. This approach also clarifies the assumptions made in quantum computation, and which properties can be seen as resources in these algorithms.
@phdthesis{diva2:1260724,
author = {Johansson, Niklas},
title = {{On the Power of Quantum Computation: Oracles}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Licentiate Thesis No. 1823}},
year = {2018},
address = {Sweden},
}
In this thesis we study device-independent quantum key distribution based on energy-time entanglement. This is a method for cryptography that promises not only perfect secrecy, but also to be a practical method for quantum key distribution thanks to the reduced complexity when compared to other quantum key distribution protocols. However, there still exist a number of loopholes that must be understood and eliminated in order to rule out eavesdroppers. We study several relevant loopholes and show how they can be used to break the security of energy-time entangled systems. Attack strategies are reviewed as well as their countermeasures, and we show how full security can be re-established.
Quantum key distribution is in part based on the profound no-cloning theorem, which prevents physical states from being copied at a microscopic level. This important property of quantum mechanics can be seen as Nature's own copy protection, and it can also be used to create a currency based on quantum mechanics, i.e., quantum money. Here, the copy-protection mechanisms of traditional coins and banknotes can be abandoned in favor of the laws of quantum physics. Previous quantum money proposals assume a traditional hierarchy where a central, trusted bank controls the economy. We show how quantum money together with a blockchain allows for Quantum Bitcoin, a novel hybrid currency that promises fast transactions, extensive scalability, and full anonymity.
@phdthesis{diva2:1150887,
author = {Jogenfors, Jonathan},
title = {{Breaking the Unbreakable:
Exploiting Loopholes in Bell's Theorem to Hack Quantum Cryptography}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1875}},
year = {2017},
address = {Sweden},
}
Optical communication networks are considered the main catalyst for the transformation of communication technology, and serve as the backbone of today's Internet. The inclusion of exciting technologies, such as optical amplifiers, wavelength division multiplexing (WDM), and reconfigurable optical add/drop multiplexers (ROADM), in optical networks has made the cost of information transmission around the world negligible. However, to maintain cost effectiveness for the growing bandwidth demand, facilitate faster provisioning, and provide richer sets of service functionality, optical networks must continue to evolve. With the proliferation of cloud computing, the demand for a promptly responsive network has increased. Moreover, there are several applications, such as real-time multimedia services, that can become realizable depending on the achievable connection set-up time.
Given the high bandwidth requirements and strict service level specifications (SLSs) of such applications, dynamic on-demand WDM networks are advocated as a first step in this evolution. SLSs are metrics of a service level agreement (SLA), which is a contract between a customer and network operator. Apart from the other candidate parameters, the set-up delay tolerance, and connection holding-time have been defined as metrics of SLA. Exploiting these SLA parameters for on-line provisioning strategies exhibits a good potential in improving the overall network blocking performance. However, in a scenario where connection requests are grouped in different service classes, the provisioning success rate might be unbalanced towards those connection requests with less stringent requirements, i.e., not all the connection requests are treated in a fair way.
The first part of this thesis focuses on different scheduling strategies for promoting the requests belonging to smaller set-up delay tolerance service classes. The first part also addresses the problem of how to guarantee the signal quality and the fair provisioning of different service classes, where each class corresponds to a specified target of quality of transmission. Furthermore, for delay impatient applications the thesis proposes a provisioning approach, which employs the possibility to tolerate a slight degradation in quality of transmission during a small fraction of the holding-time.
The next essential phase for scaling system capacity and satisfying the diverse customer demands is the introduction of flexibility in the underlying technology. In this context, the new optical transport networks, namely elastic optical networks (EON), are considered a worthwhile solution to efficiently utilize the available spectrum resources. Similarly, space division multiplexing (SDM) is envisaged as a promising technology for the capacity expansion of future networks. Among the alternatives for flexible nodes, the architecture on demand (AoD) node has the capability to dynamically adapt its composition according to the switching and processing needs of the network traffic.
The second part of this thesis investigates the benefits of set-up delay tolerance for EON by proposing an optimization model for dynamic and concurrent connection provisioning. Furthermore, it also examines the planning aspect for flexible networks by presenting strategies that employ the adaptability inherent in AoD. A significant reduction in switching devices is attainable by proper planning schemes that synthesize the network by allocating switching devices where and when needed while maximizing fiber switching operations. In addition, such a design approach also reduces the power consumption of the network. However, cost-efficient techniques in dynamic networks can deteriorate the network blocking probability owing to an insufficient number of switching modules. For dynamic networks, the thesis proposes an effective synthesis provisioning scheme along with a technique for optimal placement of switching devices in the network nodes.
The network planning problem is further extended to multi-core-fiber (MCF) based SDM networks. The proposed strategies for SDM networks aim to establish the connections through proper allocation of spectrum and core while efficiently utilizing the spectrum resources. Finally, the optimal planning strategy for SDM networks is tailored to fit synthetic AoD based networks with the goal to optimally build each node and synthesize the whole network with minimum possible switching resources.
@phdthesis{diva2:797310,
author = {Muhammad, Ajmal},
title = {{Planning and Provisioning Strategies for Optical Core Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1645}},
year = {2015},
address = {Sweden},
}
Quantum key distribution (QKD) is an application of quantum mechanics that allows two parties to communicate with perfect secrecy. Traditional QKD uses polarization of individual photons, but the development of energy-time entanglement could lead to QKD protocols robust against environmental effects. The security proofs of energy-time entangled QKD rely on a violation of the Bell inequality to certify the system as secure. This thesis shows that the Bell violation can be faked in energy-time entangled QKD protocols that involve a postselection step, such as Franson-based setups. Using pulsed and phase-modulated classical light, it is possible to circumvent the Bell test, which allows for a local hidden-variable model to give the same predictions as the quantum-mechanical description. We show that this attack works experimentally and also how energy-time-entangled systems can be strengthened to avoid our attack.
@phdthesis{diva2:786875,
author = {Jogenfors, Jonathan},
title = {{A Classical-Light Attack on Energy-Time Entangled Quantum Key Distribution, and Countermeasures}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1709}},
year = {2015},
address = {Sweden},
}
Automatic synthesis of facial animation in computer graphics is a challenging task and, although the problem is three decades old by now, there is still no unified method to solve it. This is mainly due to the complex mathematical model required to reproduce the visual meanings of facial expressions, coupled with the computational speed needed to run interactive applications. In this thesis, two different methods are proposed to address the problem of realistic animation of 3D virtual faces at interactive rates.
The first method is an integrated physically-based method which mimics facial movements by reproducing the musculoskeletal structure of a human head and the interaction among the bony structure, the facial muscles, and the skin. Differently from previously proposed approaches in the literature, the muscles are organized in a layered, interweaving structure lying on the skull; their shape is affected both by the simulation of active contraction and by the motion of the underlying anatomical parts. A design tool has been developed in order to assist the user in defining the muscles in a natural manner by sketching their shape directly on top of the already existing bones and other muscles. The dynamics of the face motion is computed through a position-based schema ensuring real-time performance, control, and robustness. Experiments demonstrate that through this model it is possible to effectively synthesize realistic expressive facial animation on different input face models in real time on consumer-class platforms.
The second method for automatically achieving animation consists of a novel facial motion cloning technique. This is a purely geometric algorithm able to transfer the motion from an animated source face to a different, initially static, target face mesh, allowing facial motion from already animated virtual heads to be reused. Its robustness and flexibility are assessed over several input datasets.
@phdthesis{diva2:646028,
author = {Fratarcangeli, Marco},
title = {{Computational Models for Animating 3D Virtual Faces}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1610}},
year = {2013},
address = {Sweden},
}
Quantum Key Distribution (QKD) is a secret key agreement technique that consists of two parts: quantum transmission and measurement on a quantum channel, and classical post-processing on a public communication channel. It enjoys provable unconditional security provided that the public communication channel is immutable. Otherwise, QKD is vulnerable to a man-in-the-middle attack. Immutable public communication channels, however, do not exist in practice. So we need to use authentication that implements the properties of an immutable channel as well as possible. One scheme that serves this purpose well is the Wegman-Carter authentication (WCA), which is built upon Almost Strongly Universal2 (ASU2) hashing. This scheme uses a new key in each authentication attempt to select a hash function from an ASU2 family, which is then used to generate the authentication tag for a message.
The main focus of this dissertation is on authentication in the context of QKD. We study ASU2 hash functions, security of QKD that employs a computationally secure authentication, and also security of authentication with a partially known key. Specifically, we study the following.
First, universal hash functions and their constructions are reviewed, and a new construction of ASU2 hash functions is presented. Second, the security of QKD that employs a specific computationally secure authentication is studied. We present detailed attacks on various practical implementations of QKD that employ this authentication. We also provide countermeasures and prove necessary and sufficient conditions for upgrading the security of the authentication to the level of unconditional security. Third, universal hash function based multiple authentication is studied. This uses a fixed ASU2 hash function followed by one-time pad encryption, to keep the hash function secret. We show that the one-time pad is necessary in every round for the authentication to be unconditionally secure. Lastly, we study the security of the WCA scheme in the case of a partially known authentication key. Here we prove tight information-theoretic security bounds and also analyse security using witness indistinguishability as used in the Universal Composability framework.
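As a toy illustration of the Wegman-Carter idea (hash the message with a secretly keyed hash, then encrypt the tag with a fresh one-time pad), the following Python sketch uses a simple polynomial-evaluation hash over a prime field; this family is only almost universal and is not the thesis's ASU2 construction:

    import secrets

    P = (1 << 61) - 1   # a Mersenne prime; field for the toy hash

    def poly_hash(key, message_blocks):
        # Evaluate the message polynomial at `key` modulo P.
        # Polynomial hashing is a standard almost-universal family; the
        # refinements needed for a true ASU2 family are omitted here.
        acc = 0
        for block in message_blocks:
            acc = (acc * key + block) % P
        return acc

    def make_tag(hash_key, otp_key, message_blocks):
        # Wegman-Carter style tag: hash, then mask the hash with a
        # one-time pad value used for this message only.
        return (poly_hash(hash_key, message_blocks) + otp_key) % P

    # Toy usage with freshly drawn keys (in QKD these come from shared secret key).
    msg = [12345, 67890, 13579]
    hash_key = secrets.randbelow(P)
    otp_key = secrets.randbelow(P)
    tag = make_tag(hash_key, otp_key, msg)
    assert tag == make_tag(hash_key, otp_key, msg)   # receiver recomputes and compares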
@phdthesis{diva2:616704,
author = {Abidin, Aysajan},
title = {{Authentication in Quantum Key Distribution:
Security Proof and Universal Hash Functions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1517}},
year = {2013},
address = {Sweden},
}
The size, complexity, and amount of traffic generated by optical communication networks have dramatically increased over the last decades. Exciting technologies, namely optical amplifiers, wavelength division multiplexing (WDM), and optical filters, have been included in optical networks in order to fulfill end users' appetite for bandwidth. However, the users' high bandwidth demand will further increase with time, as emerging on-demand bandwidth-intensive applications are starting to dominate the networks. Applications such as interactive video, ultra-high definition TV, backup storage, grid computing, e-science, and e-health, to mention a few, are becoming increasingly attractive and important for the community. Given the high bandwidth requirements and strict service level specifications (SLSs) of such applications, WDM networks equipped with agile devices, such as reconfigurable optical add-drop multiplexers and tunable transceivers integrated with G-MPLS/ASON control-plane technology, are advocated as a natural choice for their implementation. SLSs are metrics of a service level agreement (SLA), which is a contract between a customer and a network operator. Apart from other candidate parameters, the set-up delay tolerance and connection holding-time have been defined as metrics of the SLA.
This work addresses the network connection provisioning problem for the above-mentioned demanding applications by exploiting the time dimension of connection requests. The problem is investigated for dynamic networks comprising ideal and non-ideal components in their physical layer, and for applications with differentiated set-up delay tolerance and quality-of-signal requirements. Various strategies for different scenarios are proposed, each combining in a different way the concepts of set-up delay tolerance and connection holding-time awareness. The objectives of all these strategies are to enhance the network's connection provisioning capability and to fulfill customers' demands by utilizing the network resources efficiently.
@phdthesis{diva2:561890,
author = {Muhammad, Ajmal},
title = {{Connections Provisioning Strategies for dynamic WDM networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1557}},
year = {2012},
address = {Sweden},
}
Robotic automation has been a part of state-of-the-art manufacturing for many decades. Robotic manipulators are used for tasks such as welding, painting, and pick-and-place operations. Robotic manipulators are quite flexible and adaptable to new tasks, but a typical robot-based production cell requires extensive specification of the robot motion and construction of tools and fixtures for material handling. This incurs a large effort both in time and monetary expenses. The task of a vision system in this setting is to simplify the control and guidance of the robot and to reduce the need for supporting material handling machinery.
This dissertation examines the performance and properties of the current state-of-the-art local features within the setting of object pose estimation. This is done through an extensive set of experiments replicating various potential problems to which a vision system in a robotic cell could be subjected. The dissertation presents new local features which are shown to increase the performance of object pose estimation. A new local descriptor details how to use log-polar sampled image patches for truly rotation-invariant matching. This representation is also extended to use a scale-space interest point detector, which in turn makes it very competitive in our experiments. A number of variations of already available descriptors are constructed, resulting in new and competitive features, among them a scale-space based Patch-duplet.
In this dissertation a successful vision-based object pose estimation system is extended for multi-cue integration, yielding increased robustness and accuracy. Robustness is increased through algorithmic multi-cue integration, combining the individual strengths of multiple local features. Increased accuracy is achieved by utilizing manipulator movement and applying temporal multi-cue integration. This is implemented using a real flexible robotic manipulator arm.
Besides the work done on local features for ordinary image data, a number of local features for range data have also been developed. This dissertation describes the theory behind, and the application of, the scene tensor to the problem of object pose estimation. The scene tensor is a fourth-order tensor representation using projective geometry. It is shown how to use the scene tensor as a detector as well as how to apply it to the task of object pose estimation. The object pose estimation system is extended to work with 3D data.
A novel way of handling sampling of range data when constructing a detector is discussed. A volume rasterization method is presented and the classic Harris detector is adapted to it. Finally, a novel region detector, called Maximally Robust Range Regions, is presented. All developed detectors are compared in a detector repeatability test.
@phdthesis{diva2:325008,
author = {Viksten, Fredrik},
title = {{Local Features for Range and Vision-Based Robotic Automation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1325}},
year = {2010},
address = {Sweden},
}
The need for broadcast encryption arises when a sender wishes to securely distribute messages to varying subsets of receivers, using a broadcast channel, for instance in a pay-TV scenario. This is done by selecting subsets of users and giving all users in the same subset a common decryption key. The subsets will in general be overlapping so that each user belongs to many subsets and has several different decryption keys. When the sender wants to send a message to some users, the message is encrypted using keys that those users have. In this thesis we describe some broadcast encryption schemes that have been proposed in the literature. We focus on stateless schemes which do not require receivers to update their decryption keys after the initial keys have been received; particularly we concentrate on the Subset Difference (SD) scheme.
We consider the effects that the logical placement of the receivers in the tree structure used by the SD scheme has on the number of required transmissions for each message. Bounds for the number of required transmissions are derived based on the adjacency of receivers in the tree structure. The tree structure itself is also studied, also resulting in bounds on the number of required transmissions based on the placement of the users in the tree structure.
By allowing a slight discrepancy between the set of receivers that the sender intends to send to and the set of receivers that actually can decrypt the message, we can reduce the cost in number of transmissions per message. We use the concept of distortion to quantify the discrepancy and develop three simple algorithms to illustrate how the cost and distortion are related.
@phdthesis{diva2:20669,
author = {Anderson, Kristin},
title = {{Tree Structures in Broadcast Encryption}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1215}},
year = {2005},
address = {Sweden},
}
Media such as movies and images are nowadays produced and distributed digitally. It is usually simple to make copies of digital content. Consequently illegal pirate copies can be duplicated and distributed in large quantities. One way to deter authorized content receivers from illegally redistributing the media is watermarking. If individual watermarks are contained in the digital media and a receiver is a pirate and redistributes it, the pirate at the same time distributes his identity. Thus a located pirate copy can be traced back to the pirate. The watermarked media should otherwise be indistinguishable from the original media content.
To distribute media content, scalable transmission methods such as broadcast and multicast should be used. This way the distributor only needs to transmit the media once to reach all his authorized receivers. But since the same content is distributed to all receivers, the requirement of individual watermarks seems contradictory.
In this thesis we show how individually watermarked media content can be transmitted in a scalable way. Known methods are reviewed and a new method is presented. The new method is independent of the type of distribution used. A system with robust watermarks that are difficult to remove is described. Only small parts of the media content are needed to identify the pirates. The method gives only a small data expansion compared to distribution of non-watermarked media.
We will also show how information theory tools can be used to expand the amount of data in the watermarks given a specific size of the media used for the watermarking. These tools can also be used to identify parts of the watermark that have been changed by deliberate deterioration of the watermarked media, made by pirates.
@phdthesis{diva2:20656,
author = {Stenborg, Karl-Göran},
title = {{Distribution and Individual Watermarking of Streamed Content for Copy Protection}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1212}},
year = {2005},
address = {Sweden},
}
Cryptographic operations are normally carried out by a single machine. Sometimes, however, this machine cannot be trusted completely. Threshold cryptography offers an alternative where the cryptographic operation is distributed to a group of machines in such a way that the key used in the cryptographic operation is not revealed to anyone. The tool used to achieve this is threshold secret sharing, by which a secret can be distributed among a group so that subsets (of the members of the group) that are larger than some threshold can cooperate to recover the secret, while subsets smaller than this threshold cannot.
This thesis concerns distributed stream ciphers, which is a generalisation of threshold cryptography in the sense that the suggested scheme is not restricted to the use of threshold secret sharing schemes. We describe how to do distributed decryption of a ciphertext encrypted by an additive stream cipher. The system works for any secret sharing scheme that is linear under addition.
We present a modification of how secret sharing of sequences is done. Due to this modification we can generate shares locally using linear feedback shift registers instead of transmitting shares of each symbol in a sequence. A distributed decryption scheme where the keystream is distributed in this modified way is constructed.
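A minimal sketch of the underlying idea, assuming the simplest additive (XOR) sharing: the keystream is split into shares, each machine removes only its own share from the ciphertext, and the plaintext is recovered without any single machine ever holding the full keystream. The LFSR-based local share generation described above is not shown.

    import secrets
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def additive_shares(secret: bytes, n: int) -> list[bytes]:
        # Split `secret` into n XOR-shares; XOR-ing all of them recovers it.
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        shares.append(reduce(xor_bytes, shares, secret))
        return shares

    # Sender side: additive (XOR) stream cipher.
    plaintext = b"attack at dawn"
    keystream = secrets.token_bytes(len(plaintext))
    ciphertext = xor_bytes(plaintext, keystream)

    # Distributed decryption: each machine holds one keystream share and
    # removes it in turn; no machine ever sees the complete keystream.
    shares = additive_shares(keystream, n=3)
    partial = ciphertext
    for share in shares:
        partial = xor_bytes(partial, share)
    assert partial == plaintext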
@phdthesis{diva2:1179006,
author = {Öberg, Magnus},
title = {{Distributed stream ciphers}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1021}},
year = {2003},
address = {Sweden},
}
In this thesis, the question "What kind of models can be used to describe microcosmos?" will be discussed. Being difficult and very large in scope, the question has here been restricted to whether or not Local Realistic models can be used to describe Quantum-Mechanical processes, one of a collection of questions often referred to as Quantum Paradoxes. Two such paradoxes will be investigated using techniques from probability theory: the Bell inequality and the Greenberger-Horne-Zeilinger (GHZ) paradox.
A problem with the two mentioned paradoxes is that they are only valid when the detectors are 100% efficient, whereas present experimental efficiency is much lower than that. Here, an approach is presented which enables a generalization of both the Bell inequality and the GHZ paradox to the inefficient case. This is done by introducing the concept of change of ensemble, which provides both qualitative and quantitative information on the nature of the "loophole" in the 100% efficiency prerequisite, and is more fundamental in this regard than the efficiency concept. Efficiency estimates are presented which are easy to obtain from experimental coincidence data, and a connection is established between these estimates and the concept of change of ensemble.
The concept is also studied in the context of Franson interferometry, where the Bell inequality cannot immediately be used. Unexpected subtleties occur when trying to establish whether or not a Local Realistic model of the data is possible even in the ideal case. A Local Realistic model of the experiment is presented, but nevertheless, by introducing an additional requirement on the experimental setup it is possible to refute the mentioned model and show that no other Local Realistic model exists.
@phdthesis{diva2:259446,
author = {Larsson, Jan-Åke},
title = {{Quantum paradoxes, probability theory, and change of ensemble}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 654}},
year = {2000},
address = {Sweden},
}
Other
@misc{diva2:437264,
author = {Ragnemalm, Ingemar},
title = {{Polygons feel no pain}},
howpublished = {},
year = {2008},
}
@misc{diva2:263193,
author = {Olofsson, Mikael and Ericson, Thomas and Forchheimer, Robert},
title = {{Telecommunication Methods}},
howpublished = {},
year = {2007},
}
@misc{diva2:263185,
author = {Olofsson, Mikael and Ericson, Thomas and Forchheimer, Robert and Henriksson, Ulf},
title = {{Basic Telecommunication}},
howpublished = {},
year = {2006},
}
Reports
In this work we present a region detector, an adaptation to range data of the popular Maximally Stable Extremal Regions (MSER) region detector. We call this new detector Maximally Robust Range Regions (MRRR). We apply the new detector to real range data captured by a commercially available laser range camera. Using this data, we evaluate the repeatability of the new detector and compare it to some other recently published detectors. The presented detector shows a repeatability which is better than or equal to that of the best of the other detectors. The MRRR detector also offers additional data on the detected regions. The additional data could be crucial in applications such as registration or recognition.
@techreport{diva2:325006,
author = {Viksten, Fredrik and Forss\'{e}n, Per-Erik},
title = {{Maximally Robust Range Regions}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2010},
type = {Other academic},
number = {LiTH-ISY-R, 2961},
address = {Sweden},
}
Recent years have seen a lot of work on local descriptors. In all published comparisons or evaluations, the now quite well-known SIFT descriptor has been one of the top performers. For the application of object pose estimation, one comparison showed a local descriptor, called the Patch-duplet, of equal or better performance than SIFT. This paper examines different properties of those two descriptors by constructing and evaluating hybrids of them. We also extend upon the object pose estimation experiments of the original Patch-duplet paper. All tests use real images. We also show what impact camera calibration and image rectification have on an application such as object pose estimation. A new feature based on the Patch-duplet descriptor and the DoG detector emerges as the feature of choice under illumination changes in a real-world application.
@techreport{diva2:325005,
author = {Viksten, Fredrik},
title = {{Object Pose Estimation using Variants of Patch-Duplet and SIFT Descriptors}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2010},
type = {Other academic},
number = {LiTH-ISY-R, 2950},
address = {Sweden},
}
This document is an addendum to the main text in A local geometry-based descriptor for 3D data applied to object pose estimation by Fredrik Viksten and Klas Nordberg. This addendum gives proofs for propositions stated in the main document. It also details how to extract information from the fourth-order tensor referred to as S22 in the main document.
@techreport{diva2:325000,
author = {Nordberg, Klas and Viksten, Fredrik},
title = {{A local geometry based descriptor for 3D data:
Addendum on rank and segment extraction}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2010},
type = {Other academic},
number = {LiTH-ISY-R, 2951},
address = {Sweden},
}
Watermarking embeds a signature in digital documents and can be used to discourage illegal copying. Fingerprinting means that every digital document has an individual signature. Multicast is a way to distribute the same digital object to many users without the need to send one specific copy to each user. To combine distribution of fingerprinted video streams with multicast might at first glance seem impossible. This paper reviews some attempts to achieve this.
@techreport{diva2:244477,
author = {Stenborg, Karl-Göran},
title = {{Fingerprinted Video through Multicast Distribution}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {, },
address = {Sweden},
}
In a fingerprinting system, the tracing properties are not properties of the code alone; they also depend on how descendant words can be created. In this correspondence, a simple characterization of descendant set models is presented, and relations between different tracing properties in these descendant set models are derived.
@techreport{diva2:244385,
author = {Lindkvist, Tina and Löfvenberg, Jacob},
title = {{Descendant Set Models in Fingerprinting Systems}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-ISY-R, 2596},
address = {Sweden},
}
We consider the subset difference scheme for broadcast encryption and count the number of required transmissions when using this scheme. The subset difference scheme organizes receivers in a tree structure, and we note that isomorphic trees yield the same number of required transmissions. We then study the group properties of isomorphism classes of trees. Finally, we formulate some research questions for further study of the performance of the subset difference scheme.
@techreport{diva2:243457,
author = {Anderson, Kristin and Claesson, Fredrik and Ingemarsson, Ingemar},
title = {{Broadcast Encryption and Group Codes}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-ISY-R, 2605},
address = {Sweden},
}
This report considers the subset difference scheme for broadcast encryption and the number of broadcast transmissions required when using this scheme. For cases where the privileged users are gathered in a few groups we derive the worst case number of transmissions. We also present an upper bound for the number of transmissions based on the number of transitions between privileged and nonprivileged users in the user set.
@techreport{diva2:243359,
author = {Anderson, Kristin},
title = {{Performance of the Subset Difference Scheme for Broadcast Encryption}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-ISY-R, 2618},
address = {Sweden},
}
Student theses
As one of the more mature quantum technologies, quantum random number generators (QRNGs) fill an important role in producing secure and private keys for use in cryptography in e.g. quantum key distribution (QKD) systems. Many available QRNGs are expensive and optical QRNGs often require complex optical setups. If a reliable QRNG could be implemented using less expensive components they could become more widespread and be used in common applications like encryption and simulation. Shot noise is a possible entropy source for these kinds of random number generators. For such a generator to be classified as a QRNG the origin of the shot noise must be controlled and verifiable. This project aims to investigate how an entropy source could be implemented using the shot noise generated by an illuminated photodiode. This requires the design and construction of the accompanying electro-optical front-end used to prepare a signal for sampling.
The successful estimation of the electron charge e is used as a way to verify that shot noise is present in the sampled signal. Suitable component values and operating points are also investigated and it is shown that quite low gain (10 000) is suitable for the current-to-voltage amplifier which amplifies the signal generated by the photodiode. For this configuration an estimate of e was achieved with a relative error of 3%.
In conclusion, this is a promising and interesting approach for generating random numbers at high rates and at low cost. Whether the correct estimation of e is enough to certify that the device is sampling noise from the quantum regime is, however, not completely clear, and further investigation is likely needed.
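As a back-of-the-envelope illustration of how e can be estimated from shot noise via the Schottky relation sigma_I^2 = 2 e I_dc B (a simulated Python sketch with made-up numbers, not the thesis's measurement chain or amplifier model):

    import numpy as np

    # Estimate the electron charge e from the shot-noise variance of a
    # simulated photocurrent, using sigma_I^2 = 2 * e * I_dc * B.
    # All numbers below are illustrative, not taken from the thesis.
    E_TRUE = 1.602176634e-19        # C, reference value for comparison
    i_dc = 50e-6                    # A, mean photocurrent
    fs = 10e6                       # Hz, sampling rate
    bandwidth = fs / 2              # Hz, ideal Nyquist-limited noise bandwidth
    n_samples = 200_000

    rng = np.random.default_rng(1)
    # Electrons arriving per sample interval follow a Poisson distribution.
    mean_electrons = i_dc / E_TRUE / fs
    electrons = rng.poisson(mean_electrons, n_samples)
    current = electrons * E_TRUE * fs          # instantaneous current samples

    # Estimate e from the measured variance and mean of the current.
    e_estimate = current.var() / (2 * current.mean() * bandwidth)
    print(f"e ~ {e_estimate:.3e} C (reference {E_TRUE:.3e} C)")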
@mastersthesis{diva2:1822238,
author = {Clason, Martin},
title = {{Development of a QRNG front-end for shot noise measurement:
analysis of quantum shot noise originating from photodiodes}},
school = {Linköping University},
type = {{LIU-ISY/LITH-EX-A--23/5621--SE}},
year = {2023},
address = {Sweden},
}
This Master’s thesis investigates the application of dynamically generated procedural terrain textures for texturing 3D representations of the Earth’s surface. The study explores techniques to overcome limitations of the currently most common method – projecting satellite imagery onto the mesh – such as insufficient resolution for close-up views and challenges in accommodating external lighting models.
Textures for sand, rock and grass were generated procedurally on the GPU. Aliasing was prevented using a clamping technique, dynamically changing the level of detail when freely navigating across diverse landscapes. The general color of each terrain type was extracted from the satellite images, guided by land cover rasters, in a process where shadows were eliminated using HSV color space conversion and filtering.
The procedurally generated textures provide significantly more details than the satellite images in close-up views, while missing some information in medium- to far-distance views, due to the satellite images containing information lacking in the 3D mesh.
A qualitative analysis spanning six data sets from diverse global locations demonstrates that the proposed methods are applicable across a range of landscapes and climates.
@mastersthesis{diva2:1805880,
author = {Pohl Lundgren, Anna},
title = {{Procedural Natural Texture Generation on a Global Scale}},
school = {Linköping University},
type = {{LiTH-ISY-EX--23/5610--SE}},
year = {2023},
address = {Sweden},
}
This thesis explores possible improvements, using parallel computing, to the PSF-alignment and image subtraction algorithm found in HOTPANTS. In time-domain astronomy the PSF-alignment and image subtraction algorithm OIS is used to identify transient events. HOTPANTS is a software package based on OIS, the software package ISIS, and other subsequent research done to improve OIS. A parallel GPU implementation of the algorithm from HOTPANTS – henceforth known as BACH – was created for this thesis. The goal of this thesis is to answer the questions: “what parts of HOTPANTS are most suited for parallelisation?” and “how does BACH perform compared to HOTPANTS and SFFT?”, another PSF-alignment and image subtraction tool. The authors found that the parts most susceptible to parallelisation were the convolution and subtraction steps. However, the subtraction did not display a significant improvement over its sequential counterpart. The other parts of HOTPANTS were deemed too complex to implement in parallel on the GPU. However, some parts could probably either be partly parallelised on the GPU or parallelised using the CPU. BACH was always as fast as or faster than HOTPANTS; it was generally 2 times faster, but was up to 4.5 times faster in some test cases. It was also faster than SFFT, but this result was not equivalent to the result presented in [15], which is why the authors of this thesis believe something was wrong with either the installation of SFFT or the hardware used to test it.
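For context, the convolution-and-subtraction step discussed above has the form D = I - (T * K): the template is convolved with a PSF-matching kernel and subtracted from the science frame. A minimal CPU sketch in Python, with a made-up Gaussian kernel and the kernel-fitting stage omitted (this is not the BACH GPU implementation):

    import numpy as np
    from scipy.signal import fftconvolve

    def difference_image(science, template, kernel):
        # Convolve the template with a PSF-matching kernel and subtract it
        # from the science frame (the core step of OIS-style image subtraction;
        # fitting the kernel itself is the hard part and is omitted here).
        matched = fftconvolve(template, kernel, mode="same")
        return science - matched

    # Toy usage with a made-up 2D Gaussian kernel.
    y, x = np.mgrid[-7:8, -7:8]
    kernel = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    kernel /= kernel.sum()
    rng = np.random.default_rng(0)
    template = rng.random((256, 256))
    science = fftconvolve(template, kernel, mode="same") + 0.01 * rng.standard_normal((256, 256))
    diff = difference_image(science, template, kernel)   # ~ pure noise if the kernel is right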
@mastersthesis{diva2:1790545,
author = {Wång, Annie and Lells, Victor},
title = {{Parallelising High Order Transform of Point Spread Function and Template Subtraction for Astronomic Image Subtraction:
The implementation of BACH}},
school = {Linköping University},
type = {{LiTH-ISY-EX--23/5581--SE}},
year = {2023},
address = {Sweden},
}
With the growing need for secure and high-capacity communications, innovative solutions are needed to meet the demands of tomorrow. One such innovation is to make use of the still unutilized spatial dimension of light in communications, which has promising applications both in enabling higher data traffic and in the security protocols of the future in quantum communications. The perhaps most promising way of realizing this technology is through spatial division multiplexing (SDM) in optical fibers. There are many challenges and open questions in implementing this, such as how perturbations to the signal should be kept under control and which type of optical fiber to use. Consequently, this thesis focuses on the implementation of SDM in few-mode fibers, where the perturbation effects on the spatial distribution have been investigated. Following this investigation, adaptive spatial mode control using a motorized polarization controller has been implemented. The mode control has been done with a focus on relevance for quantum technology applications such as quantum key distribution (QKD) and quantum random number generation (QRNG), but also for spatial division multiplexing for general communications. For this reason, two evaluation metrics have been optimized for: extinction ratio and equal amplitude. The control algorithm used is an adaptation of the optimization algorithm Stochastic Parallel Gradient Descent (SPGD). Control has been achieved in stabilizing the extinction ratio of LP11a and LP11b over 12 hours with an average extinction ratio of 98 %. Additionally, equal amplitude between LP11a and LP11b has been achieved over 1 hour with average relative differences of 0.42 % and 0.45 %. Of the perturbation effects investigated, temperature caused large disturbances to the signal, which were later corrected for with the implemented algorithm.
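For reference, a generic SPGD iteration looks roughly as follows (a Python sketch with a made-up quadratic metric standing in for the measured extinction ratio; the thesis's adaptation and the polarization-controller interface are not shown):

    import numpy as np

    rng = np.random.default_rng(0)

    def spgd_step(controls, measure_metric, delta=0.1, gain=5.0):
        # One Stochastic Parallel Gradient Descent (ascent) step: perturb all
        # control channels simultaneously, measure the metric for both signs
        # of the perturbation, and move the controls along the estimated gradient.
        perturbation = rng.choice([-delta, delta], size=controls.shape)
        j_plus = measure_metric(controls + perturbation)
        j_minus = measure_metric(controls - perturbation)
        return controls + gain * (j_plus - j_minus) * perturbation

    # Toy stand-in for the real figure of merit (e.g. a measured extinction
    # ratio): a smooth function maximized at a known point.
    target = np.array([0.3, -0.2, 0.7])
    metric = lambda v: -np.sum((v - target) ** 2)

    controls = np.zeros(3)        # e.g. paddle settings of a polarization controller
    for _ in range(300):
        controls = spgd_step(controls, metric)
    print(controls)               # approaches `target`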
@mastersthesis{diva2:1766160,
author = {Pihl, Oscar},
title = {{Characterization and Stabilization of Transverse Spatial Modes of Light in Few-Mode Optical Fibers}},
school = {Linköping University},
type = {{LiTH-ISY-EX--23/5571--SE}},
year = {2023},
address = {Sweden},
}
This master's thesis explores a space-division-multiplexing (SDM) platform for a delayed-choice experiment. SDM is a multiplexing technique for optical data transmission that employs spatial modes in a multi- or few-mode fiber to increase the transmission capacity. The spatial modes can thus be used as separate channels. SDM has shown great potential for quantum information systems, making it intriguing to investigate its broad applications by examining its use in a delayed-choice experiment. The delayed-choice experiment, proposed by J. A. Wheeler in 1978, explores the particle- and wave-like behavior of quantum particles and tests whether the particle knows in advance if it should propagate as a wave or a particle through the experimental platform. Hence, it was suggested that the experiment should be changed after the particle had entered the experimental platform. The experiment has since been realized in many different constellations, but previous wave-particle delayed-choice experiments have not been demonstrated with SDM nor with an all-in-fiber platform.
The research involved modeling and constructing an SDM fiber-optic platform, using only commercially available fiber-optic telecommunication components. The platform was constructed with photonic lanterns, used as spatial division multiplexer and demultiplexer, and a two-input fiber Sagnac interferometer, as a removable beam splitter. The system was tested with classical light, but the platform could without difficulty be moved to the quantum domain for performing the delayed-choice experiment with single photons. The thesis resulted in an SDM platform with good performance for future measurement of both particle- and wave-like behavior of photons in a delayed-choice experiment.
@mastersthesis{diva2:1765303,
author = {Karlsson, Hilma},
title = {{Space-Division-Multiplexing Platform for a Delayed-Choice Experiment}},
school = {Linköping University},
type = {{LiTH-ISY-EX--23/5566--SE}},
year = {2023},
address = {Sweden},
}
This thesis explores the possibilities of creating landscapes through procedural means within the game engine Unreal Engine 5. The aim is to provide a flexible procedural landscape tool that doesn't limit the user and that is compatible with existing systems in the engine. The research questions focus on comparison to other work regarding landscape generation and generation of procedural roads.
The process to achieve this was done through an extensive implementation adding modules that both build upon and add to the source code. The implementation was divided into five major components: noise generation for terrain, biotope interpolation, asset distribution, road generation, and a user interface.
Perlin noise, utilizing fractal Brownian motion, was a vital part of generating terrain with varying features. For interpolation, a modified version of low-pass Gaussian filtering was implemented in order to blend biotope edges together. Asset distribution and road generation were implemented in a way that uses pseudo-randomness combined with heuristics. A user interface was built to tie everything together for testing.
The results show potential for assisting in procedural landscape creation with a large amount of freedom in customization. There are, however, flaws in some aspects; namely, the interpolation methods suffer from clear visual artefacts. Whether it is suitable for professional standards remains to be fully proven objectively, as the testing in this thesis work was limited.
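As a small illustration of the noise component, the following Python sketch sums octaves of lattice noise in a fractal-Brownian-motion fashion (value noise stands in for Perlin gradient noise here for brevity; the parameters are illustrative, not those used in the thesis):

    import numpy as np

    def _hash_noise(ix, iy, seed=0):
        # Deterministic pseudo-random value in [0, 1) for integer lattice points.
        h = (ix * 374761393 + iy * 668265263 + seed * 1442695040888963407) & 0xFFFFFFFF
        h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
        return (h & 0xFFFFFF) / float(1 << 24)

    def value_noise(x, y):
        # Smoothly interpolated lattice noise (Perlin noise would blend
        # gradients instead of values, but the octave structure is the same).
        ix, iy = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - ix, y - iy
        ux, uy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)   # smoothstep fade
        n00, n10 = _hash_noise(ix, iy), _hash_noise(ix + 1, iy)
        n01, n11 = _hash_noise(ix, iy + 1), _hash_noise(ix + 1, iy + 1)
        top = n00 + ux * (n10 - n00)
        bottom = n01 + ux * (n11 - n01)
        return top + uy * (bottom - top)

    def fbm(x, y, octaves=5, lacunarity=2.0, gain=0.5):
        # Fractal Brownian motion: sum octaves of noise with increasing
        # frequency (lacunarity) and decreasing amplitude (gain).
        total, amplitude, frequency = 0.0, 1.0, 1.0
        for _ in range(octaves):
            total += amplitude * value_noise(x * frequency, y * frequency)
            amplitude *= gain
            frequency *= lacunarity
        return total

    # Sample a small heightmap.
    heightmap = np.array([[fbm(x * 0.05, y * 0.05) for x in range(64)] for y in range(64)])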
@mastersthesis{diva2:1759144,
author = {Sjögren, Viktor and Malteskog, William},
title = {{Procedural Worlds:
A proposition for a tool to assist in creation of landscapes byprocedural means in Unreal Engine 5}},
school = {Linköping University},
type = {{LiTH-ISY-EX--23/5546--SE}},
year = {2023},
address = {Sweden},
}
Knowing the shapes, sizes and positional relations between features in an image can be useful for different types of image processing. Using a Distance Transform can give us these properties as a Distance Map. There are many different variations of distance transforms that can increase accuracy or add functionality; two such transforms are the Anti-Aliased Euclidean Distance Transform and the Signed Euclidean Distance Transform. To get the benefits of both of these, it is of interest to see if they can be combined and, if so, how such a transform performs. Investigating the possibility of such a transform is the main object of this thesis.
To create this combined transform, a copy of the image was created and then inverted; both images are transformed and the resulting distance maps are combined into one. Signed distance maps are created for three transforms using this method. The transforms in question are EDT, AAEDT and VAAEDT. All transforms are then evaluated using a series of images containing two randomly placed circles, where the circles are created using simple Signed Distance Functions.
The signed transforms work, and the AAEDT performs well compared to the Signed Euclidean Distance Transform. These results were expected, as a similar gap in results can be seen between the regular EDT and AAEDT. But this transform is not perfect, and there is room for improvements in the accuracy, a good start for future work.
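The combination step itself can be illustrated with plain Euclidean distance transforms (a Python/SciPy sketch; the anti-aliased and vector-based variants evaluated in the thesis refine this, and the sign convention may differ):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_edt(binary_image):
        # Signed Euclidean distance map: positive distances outside the shape,
        # negative inside, built by combining the EDT of the image and of its
        # inverse (a plain-EDT stand-in for the anti-aliased variants).
        outside = distance_transform_edt(~binary_image)   # distance to the shape
        inside = distance_transform_edt(binary_image)     # distance to the background
        return outside - inside

    # Toy usage: a filled circle of radius 10 centered in a 64x64 image.
    yy, xx = np.mgrid[0:64, 0:64]
    circle = (xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2
    sdm = signed_edt(circle)            # roughly -10 at the center, growing outward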
@mastersthesis{diva2:1691141,
author = {Johanssson, Erik},
title = {{Signed Anti-Aliased Euclidean Distance Transform:
Going from unsigned to signed with the assistance of a vector based method}},
school = {Linköping University},
type = {{LiTH-ISY-EX--22/5521--SE}},
year = {2022},
address = {Sweden},
}
Communication has always been a vital part of our society, and day-to-day communication is increasingly becoming more digital. VoIP (voice over IP) is used for real-time communication, and to be able to send the information over the internet the speech must be compressed to lower the number of bits needed for transmission. Codecs are used to compress the speech, or any other type of data transmitted over a network, which can introduce some noise if lossy compression is used. Depending on the bandwidth, bit rate, and codec used, distortion can be minimized, which would result in higher perceived speech quality.
In the thesis, two codecs, G729D and Opus, were tested and evaluated with two different objective perceive speech quality metrics, POLQA and PESQ. The codecs were also tested with different emulated network scenarios, 2G, 3G, 4G, satellite two-hop, and LAN. Furthermore, Opus was tested with and without VAD (voice activity detection) to see how VAD could affect the perceived speech quality. The different network scenarios did not impact the results of the evaluation, since the main difference between the network scenarios was latency, which POLQA and PESQ do not consider in the evaluation. Opus achieved a higher MOS-LQO (mean opinion score listening quality objective) than G729D. However, when VAD was enabled with Opus for a low bit rate, 8 kbit/s, the MOS-LQO was lower than without VAD.
@mastersthesis{diva2:1671319,
author = {Alm\'{e}r, Louise},
title = {{Evaluation of the Perceived Speech Quality for G729D and Opus:
With Different Network Scenarios and an Implemented VoIP Application}},
school = {Linköping University},
type = {{LiTH-ISY-EX--22/5475--SE}},
year = {2022},
address = {Sweden},
}
Today, a modern and interesting research area is machine learning. Another new and exciting research area is quantum computation, which is the study of the information processing tasks that can be accomplished using quantum mechanical systems. This master thesis combines both areas and investigates quantum machine learning.
Kerenidis' and Prakash's quantum algorithm for recommendation systems, which offered exponential speedup over the best known classical algorithms at the time, is examined together with Tang's classical algorithm for recommendation systems, which operates in time only polynomially slower than the previously mentioned algorithm.
The speedup in the quantum algorithm was achieved by assuming that the algorithm had quantum access to the data structure and that the mapping to the quantum state was performed in polylog(mn). The speedup in the classical algorithm was attained by assuming that the sampling could be performed in O(log n) and O(log mn) for vectors and matrices, respectively.
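Central to the classical (de-quantized) algorithm is length-squared sampling: drawing an index with probability proportional to the squared magnitude of the corresponding entry. A naive linear-time Python sketch of that primitive (the algorithm itself assumes it is available in logarithmic time from a suitable data structure):

    import numpy as np

    def l2_sample(vector, rng=np.random.default_rng()):
        # Draw an index i with probability |v_i|^2 / ||v||^2 (length-squared
        # sampling), done here naively in linear time for illustration.
        probabilities = np.abs(vector) ** 2
        probabilities /= probabilities.sum()
        return rng.choice(len(vector), p=probabilities)

    v = np.array([3.0, 0.0, -4.0])
    counts = np.bincount([l2_sample(v) for _ in range(10000)], minlength=3)
    print(counts / counts.sum())    # close to [0.36, 0.0, 0.64]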
@mastersthesis{diva2:1665668,
author = {Sköldhed, Stefanie},
title = {{De-quantizing quantum machine learning algorithms}},
school = {Linköping University},
type = {{LiTH-ISY-EX--22/5470--SE}},
year = {2022},
address = {Sweden},
}
Quantum mechanics has played a big role in the development of our understanding of the smallest things in the universe. It has provided descriptions for phenomena like single electrons or single photons, which are single particles of light. One of the most mysterious properties of quantum systems is the ability to behave as a particle or a wave. In 1978, J. A. Wheeler devised an experiment to investigate if a quantum system knows in advance if it should propagate as a wave or as a particle through an experiment, by changing the experiment after the quantum system has entered the experimental set-up.
Here an all-in-fiber optical platform for a Wheeler's delayed-choice experiment is modeled, constructed and tested using commercially available fiber-optic components. This is in contrast to previous delayed-choice experiments, which have used free-space components in some parts of their experimental set-ups. The optical set-up was modeled and simulated using a quantum formalism, with future work in mind if the platform is used to perform a quantum delayed-choice experiment.
The platform used a Sagnac interferometer as the second beamsplitter in a Mach-Zehnder interferometer, to perform the choice of measuring either particle or wave properties. With a fiber platform, the length of the platform can easily be extended with more fiber to accommodate a large separation between the beamsplitter at the beginning of the set-up and the Sagnac interferometer at the end of the set-up. The result was a stable platform for measuring the particle behavior of light with good performance, and the ability to switch between these measurements on the fly. The system was tested with classical light, but the light source can be changed from a laser to, for example, an attenuated laser, to enter the quantum domain for performing a quantum delayed-choice experiment using the platform.
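For readers unfamiliar with the underlying interferometry, the toy calculation below shows the textbook output statistics of a Mach-Zehnder interferometer with 50/50 beamsplitters: interference fringes when the second beamsplitter is in place (a wave measurement) and a phase-independent 50/50 split when it is removed (a particle measurement). It is a generic illustration under these standard assumptions, not the quantum-formalism model developed in the thesis.

```python
import numpy as np

def mzi_output_probs(phi: float, closed: bool) -> tuple[float, float]:
    """Detection probabilities at the two outputs of a Mach-Zehnder interferometer.

    closed=True  : second beamsplitter present -> interference (wave behaviour)
    closed=False : second beamsplitter absent  -> which-path   (particle behaviour)
    """
    if closed:
        p0 = np.cos(phi / 2.0) ** 2      # fringes as a function of the phase phi
    else:
        p0 = 0.5                         # no interference, 50/50 regardless of phi
    return p0, 1.0 - p0

for phi in (0.0, np.pi / 2, np.pi):
    print(phi, mzi_output_probs(phi, closed=True), mzi_output_probs(phi, closed=False))
```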
@mastersthesis{diva2:1662064,
author = {Åhlgren, Gustaf},
title = {{A Platform for a Wheeler's Delayed-Choice Experiment in Optical Fiber}},
school = {Linköping University},
type = {{LiTH-ISY-EX--22/5462--SE}},
year = {2022},
address = {Sweden},
}
An MBS (multi-port beamsplitter) for higher-dimensional quantum communication has been designed and constructed, and the theory and method for this are presented in this thesis. It uses optical fibers in a heterogeneous structure with a single-mode fiber spliced to a multi-mode fiber, which in turn is spliced to a few-mode fiber. Three MBSs were constructed and tested to see if superpositions between spatial modes could be generated: one with 5.65 cm of multi-mode fiber, one with 9 cm of multi-mode fiber, and one with just the single-mode fiber spliced to the few-mode fiber. The optical modes that the superpositions focused on were the linearly polarized LP01, LP11a and LP11b modes. Simulations of superpositions between these modes were performed, and experiments were done to see if these simulations could be realised. The shapes of these superpositions could be seen with a camera, and the stability of the different modal powers and of the phases between the modes were also tested. The last experiment tested the tunability of the modes by finding the maximum and minimum output power for each individual mode. The results of these experiments show that the stability of power and relative phases is high, and testing of the tunability shows that the 9 cm MBS is the most tunable, the 5.65 cm MBS the second best, and the SMF-FMF MBS the worst. Even though the shapes of the superpositions, the stability and the tunability show very positive results, the conclusion is that more experiments are required in order to identify the superpositions and for this to be used in a quantum communication system.
@mastersthesis{diva2:1662090,
author = {Spegel-Lexne, Daniel},
title = {{Design and Construction of a Multi-Port Beamsplitter Based on Few-Mode-Fibers}},
school = {Linköping University},
type = {{LiTH-ISY-EX--22/5463--SE}},
year = {2022},
address = {Sweden},
}
Hit registration algorithms in First-Person Shooter games define how the server processes gunfire from clients. Network conditions, such as latency, cause a mismatch between the game worlds observed at the client and the server. To improve the experience for clients when authoritative servers are used, the server attempts to reconcile the differing views when performing hit registration through techniques known as lag compensation. This thesis surveys recent hit registration techniques and discusses how they can be implemented and evaluated with the use of a modern game engine. To this end, a lag compensation model based on animation pose rewind is implemented in Unreal Engine 4. Several programming models described in industry and research are used in the implementation, and experiences from further integrating the techniques into a commercial FPS project are also discussed. To reason about the accuracy of the algorithm, client-server discrepancy metrics are defined, as well as a hit rate metric which expresses the worst-case effect on the shooting experience of a player. Through automated tests, these metrics are used to evaluate the hit registration accuracy. The rewind algorithm was found to make the body-part-specific hit registration function well independently of latency. At high latencies, the rewind algorithm is necessary to make sure that clients can still aim at where they perceive their targets to be and expect their hits to be registered. Still, inconsistencies in the results remain, with hit rate values sometimes falling below 50%. This is theorized to be due to fundamental networking mechanisms of the game engine which are difficult to control, and it presents a counterpoint to the otherwise gained ease of implementation when using Unreal Engine.
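To make the rewind idea concrete, the hypothetical sketch below keeps a short history of server-side poses and rewinds a target to the time the shooting client actually observed before performing the hit test. Class and field names are invented for illustration; this is not Unreal Engine code or the thesis implementation.

```python
from bisect import bisect_right
from dataclasses import dataclass, field

@dataclass
class PoseHistory:
    """History of (server_time, pose) snapshots for one actor."""
    max_age: float = 1.0                       # seconds of history to keep
    snapshots: list = field(default_factory=list)

    def record(self, t: float, pose: dict) -> None:
        self.snapshots.append((t, pose))
        while self.snapshots and self.snapshots[0][0] < t - self.max_age:
            self.snapshots.pop(0)

    def rewind(self, t: float) -> dict:
        """Return the latest snapshot recorded at or before time t (clamped)."""
        times = [s[0] for s in self.snapshots]
        i = max(bisect_right(times, t) - 1, 0)
        return self.snapshots[i][1]

# Server-side hit test: rewind the target to the time the client saw it.
history = PoseHistory()
for t in (0.00, 0.05, 0.10, 0.15):
    history.record(t, {"x": t * 10.0})         # target moving along x

server_time, client_latency = 0.15, 0.08
shot_time = server_time - client_latency       # time the shooter actually observed
print(history.rewind(shot_time))               # -> {'x': 0.5}, the snapshot at t = 0.05
```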
@mastersthesis{diva2:1605200,
author = {Lundgren, Jonathan},
title = {{Implementation And Evaluation Of Hit Registration In Networked First Person Shooters}},
school = {Linköping University},
type = {{LiTH-ISY-EX--21/5369--SE}},
year = {2021},
address = {Sweden},
}
The constantly increasing amount of shared data worldwide demands a continuously improved understanding of current smartphone security vulnerabilities and limitations to ensure secure communication. Securing sensitive enterprise data on a Bring Your Own Device (BYOD) setup can be quite challenging. Allowing multiple applications to communicate through Inter-process Communication (IPC) in a shared environment can induce a wide range of security vulnerabilities if not implemented adequately. In this thesis, multiple different IPC mechanisms have been investigated and applied with respect to confidentiality, integrity, and availability (CIA-triad) of a system including an Android application and a server, to enable a secure Single Sign-On (SSO) solution. Relevant threats were identified that could highlight vulnerabilities related to the use of IPC mechanisms provided by the Android OS such as AIDL, Messenger, Content Provider, and Broadcast Receiver. A Proof-of-Concept (POC) system for each IPC mechanism was developed and implemented with targeted mitigation techniques (MT) and best practices to ensure a high level of conformity with the CIA-triad. Additionally, each IPC mechanism was evaluated through a set of functional tests, a Grey-box penetration testing approach, and a performance analysis of the execution time and the total Lines-of-Code (LOC) required. The results show that there are indeed different ways of achieving secure communication on the Android OS and thereby enabling a secure SSO solution by ensuring the inclusion of related MTs to prevent critical security vulnerabilities. Also, the IPC mechanism with the highest performance in relation to execution time and LOC is shown to be AIDL.
@mastersthesis{diva2:1555512,
author = {Holmberg, Daniel},
title = {{Secure IPC To Enable Highly Sensitive Communication In A Smartphone Environment With A BYOD Setup}},
school = {Linköping University},
type = {{LiTH-ISY-EX--21/5368--SE}},
year = {2021},
address = {Sweden},
}
The multiport beam splitter is a new research topic in quantum communication. To improve the security of the system, the dimension/capacity of quantum communication should increase. In this thesis, the design, simulation and methodology of an NxN multiport beam splitter on a photonic integrated circuit are explained. A photonic integrated circuit has more advantages than other optical components for designing a multiport beam splitter. A multiport beam splitter on a photonic chip provides configuration stability and a compact prototype for a future quantum network.
@mastersthesis{diva2:1577195,
author = {Saha, Susmita},
title = {{Design and optimization of multi-port beam splitters on integrated photonic circuits}},
school = {Linköping University},
type = {{LiTH-ISY-EX--21/5412--SE}},
year = {2021},
address = {Sweden},
}
An intelligent sensor system has the potential of providing its operator with relevant information, lowering the risk of human errors, and easing the operator's workload. One way of creating such a system is by using reinforcement learning, and this thesis studies how reinforcement learning can be applied to a simple sensor control task within a detailed 3D rendered environment. The studied agent controls a stationary camera (pan, tilt, zoom) and has the task of finding stationary targets in its surrounding environment. The agent is end-to-end, meaning that it only uses its sensory input, in this case images, to derive its actions. The aim was to study how an agent using a simple neural network performs on the given task and whether behavior cloning can be used to improve the agent's performance.
The best-performing agents in this thesis developed a behavior of rotating until a target came into their view. Then they directed their camera to place the target at the image center. The performance of these agents was not perfect: their movement contained quite a bit of randomness and sometimes they failed their task. Even though the performance was not perfect, the results were positive, since the developed behavior would be able to solve the task efficiently given that it is refined. This indicates that the problem is solvable using methods similar to ours. The best agent using behavior cloning performed on par with the best agent that did not use behavior cloning. Therefore, behavior cloning did not lead to improved performance.
@mastersthesis{diva2:1573240,
author = {Eriksson, Rickard},
title = {{Deep Reinforcement Learning Applied to an Image-Based Sensor Control Task}},
school = {Linköping University},
type = {{LiTH-ISY-EX--21/5410--SE}},
year = {2021},
address = {Sweden},
}
@mastersthesis{diva2:1500007,
author = {Sjöberg, Oscar},
title = {{Evaluating Image Compression Methods on Two Dimensional Height Representations}},
school = {Linköping University},
type = {{}},
year = {2020},
address = {Sweden},
}
Cyber attacks happen on a daily basis, where criminals aim to disrupt internet services or, in other cases, try to get hold of sensitive data. Fortunately, there are systems in place to protect these services, and one can rest assured that communication channels and data are secured under well-studied cryptographic schemes.
Still, a new class of computation power is on the rise, namely quantum computation. Companies such as Google and IBM have in recent years invested in research regarding quantum computers. In 2019, Google announced that they had achieved quantum supremacy. A quantum computer could in theory break the currently most popular schemes that are used to secure communication.
Whether quantum computers will be available in the foreseeable future, or at all, is still uncertain. Nonetheless, the implication of a practical quantum computer calls for a new class of crypto schemes; schemes that will remain secure in a post-quantum era. Since 2016, researchers within the field of cryptography have been developing post-quantum cryptographic schemes.
One specific branch within this area is lattice-based cryptography. Lattice-based schemes base their security on underlying hard lattice problems, for which there are no currently known efficient algorithms, neither for quantum nor for classical computers. A promising scheme that builds upon these types of problems is Kyber. The aforementioned scheme, as well as its competitors, works efficiently on most computers. However, they still demand a substantial amount of computation power, which is not always available. Some devices are constructed to operate with low power, and are computationally limited to begin with. This group of constrained devices, which includes smart cards and microcontrollers, also needs to adopt the post-quantum crypto schemes. Consequently, there is a need to explore how well Kyber and its relatives work on these low-power devices.
In this thesis, a variant of the cryptographic scheme Kyber is implemented and evaluated on an Infineon smart card. The implementation replaces the scheme’s polynomial multiplication technique, NTT, with Kronecker substitution. In the process, the cryptographic co-processor on the card is leveraged to perform Kronecker substitution efficiently. Moreover, the scheme’s original functionality for sampling randomness is replaced with the card’s internal TRNG.
The results show that an IND-CPA secure variant of Kyber can be implemented on the smart card, at the cost of segmenting the IND-CPA functions. All in all, key generation, encryption, and decryption take 23.7 s, 30.9 s and 8.6 s to execute respectively. This shows that the thesis work is slower than implementations of post-quantum crypto schemes on similarly constrained devices.
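For readers unfamiliar with Kronecker substitution, the sketch below shows the basic trick in miniature: packing polynomial coefficients into one large integer (evaluation at x = 2^bits), doing a single big-integer multiplication, and unpacking the product. It ignores Kyber's modular reductions and negative coefficients and is only meant to illustrate why a big-integer co-processor can accelerate polynomial multiplication; the function is ours, not the thesis code.

```python
def kronecker_multiply(a: list[int], b: list[int], bits: int) -> list[int]:
    """Multiply two polynomials with non-negative coefficients by packing them
    into big integers (evaluation at x = 2**bits), multiplying, and unpacking.
    `bits` must be large enough that no coefficient of the product overflows
    into its neighbour."""
    A = sum(c << (i * bits) for i, c in enumerate(a))
    B = sum(c << (i * bits) for i, c in enumerate(b))
    C = A * B                                   # single big-integer multiplication
    mask = (1 << bits) - 1
    out = []
    while C:
        out.append(C & mask)
        C >>= bits
    return out

# (1 + 2x)(3 + x) = 3 + 7x + 2x^2
print(kronecker_multiply([1, 2], [3, 1], bits=32))   # [3, 7, 2]
```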
@mastersthesis{diva2:1464485,
author = {Eriksson, Hampus},
title = {{Implementing and Evaluating the Quantum Resistant Cryptographic Scheme Kyber on a Smart Card}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5333--SE}},
year = {2020},
address = {Sweden},
}
This thesis presents a comparison between different algorithms for optimal scanline voxelization of 3D models. As the optimal scanline relies on line voxelization, three such algorithms were evaluated: Real Line Voxelization (RLV), Integer Line Voxelization (ILV) and a 3D Bresenham line drawing algorithm. RLV and ILV were both based on the voxel traversal by Amanatides and Woo. The algorithms were evaluated based on runtime and on the approximation error of the integer versions, ILV and Bresenham. The result was that RLV performed better in every case, with ILV being 20-250% slower and Bresenham being 20-500% slower. The error metric used was the Jaccard distance, which generally started at 20% and grew towards 25% for higher voxel resolutions. This was true for both ILV and Bresenham. The conclusion was that there is no reason to use any of the integer versions over RLV, as they both performed worse and approximated the original 3D model less accurately.
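The error metric mentioned above, the Jaccard distance, compares two voxelizations as sets of occupied cells. A minimal sketch with made-up example sets:

```python
def jaccard_distance(voxels_a: set, voxels_b: set) -> float:
    """Jaccard distance between two voxelizations given as sets of (x, y, z)
    cells: 1 - |A intersect B| / |A union B|. 0.0 is identical, 1.0 is disjoint."""
    union = voxels_a | voxels_b
    if not union:
        return 0.0
    return 1.0 - len(voxels_a & voxels_b) / len(union)

reference = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)}
approx    = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 1, 0)}
print(jaccard_distance(reference, approx))   # 0.4  (3 shared cells out of 5 total)
```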
@mastersthesis{diva2:1459330,
author = {Håkansson, Tim},
title = {{A Comparison of Optimal Scanline Voxelization Algorithms}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5307--SE}},
year = {2020},
address = {Sweden},
}
With increasing awareness of potential security threats there is a growing interest in communication security for spacecraft control and data. Traditionally, commercial and scientific missions have relied on their uniqueness to prevent security breaches. Over time the market has changed, with open systems for mission control and data distribution, increased connectivity, and the use of existing and shared infrastructure. Therefore, security layers are being introduced to protect spacecraft communication. In order to mitigate the perceived threats, the Consultative Committee for Space Data Systems (CCSDS) has proposed the addition of communication security in the various layers of the communication model. This thesis describes and discusses their proposal and looks into how this should be implemented in the data link layer of the communication protocol to protect from timing attacks. An implementation of AES-CTR+GMAC is constructed in software to compare different key lengths, and another implementation is constructed in synthesized VHDL for use on hardware to investigate the impact on area consumption on the FPGA, as well as whether it is possible to secure it from cache-timing attacks.
@mastersthesis{diva2:1462858,
author = {Sundberg, Sarah},
title = {{Data Link Layer Security for Spacecraft Communication Implementation on FPGA}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5332--SE}},
year = {2020},
address = {Sweden},
}
Geometric shapes can be represented in a variety of different ways. A distance map is a map from points to distances. This can be used as a shape representation, which can be created through a process known as a distance transform. This thesis project tests a method for three-dimensional distance transforms using fractional volume coverage. This method produces distance maps with subvoxel distance values. The result which is achieved is clearly better than what would be expected from a binary distance transform and similar to the one known from previous work. The resulting code has been published under a free and open source software license.
@mastersthesis{diva2:1443754,
author = {Segerbäck, Emil},
title = {{Shape Representation Using a Volume Coverage Model}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5291--SE}},
year = {2020},
address = {Sweden},
}
This thesis investigates the phenomenon of phase stability in a fiber-optical MZI (Mach-Zehnder Interferometer). The MZI is a key building block of optical systems for use in experiments with both continuous-wave light and single photons. By splitting incoming light into two beams and allowing it to interfere with itself, an interference pattern is visible at the output, and this phenomenon can be used to encode information. This is the operating principle in, for example, QKD (Quantum Key Distribution) experiments. This interference requires a coherence length that is greater than the path length difference between the beams that the incoming light is split into. In particular, the phases of the beams must be equal to achieve constructive interference. If one beam is phase-shifted (with respect to the other) due to the light having traversed a longer path, only partially constructive interference is achieved. If the phase shift also varies with time, this leads to a system where experiments can no longer reliably be performed. Sources of these fluctuations are thermal, acoustic or mechanical. Fiber-optical interferometers are particularly sensitive to path length fluctuations of the waveguides, as the fiber-optic medium contracts and elongates with temperature, and they also have a larger surface area for circulating air to mechanically disturb the waveguides than bulk-optics interferometers.
In this thesis, a solution to environment-induced phase drift is presented by evaluating implementations of feedback algorithms for automatic control. A PID (proportional-integral-derivative) controller and an ICA (incremental control algorithm) have been investigated, and the performance of these controllers has been compared when used with, and without, optical enclosures. The algorithms are implemented in an FPGA (Field-Programmable Gate Array), and the controller actuates an electro-optical phase modulator that can add a phase shift to one of the light beams in the MZI. This thesis shows that significant improvement in the optical stability can be achieved with active control compared to an interferometer without active phase control.
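As a rough illustration of the feedback idea (here in Python rather than on an FPGA), the sketch below runs a minimal discrete PID controller against a toy plant whose phase drifts a little every step; the gains, drift magnitude and plant response are made up and do not reflect the thesis hardware.

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: the "measured" phase drifts every step and the controller
# output shifts it back via the phase modulator. All numbers are illustrative.
pid = PID(kp=0.8, ki=2.0, kd=0.0, dt=1e-3)
phase = 0.3                                        # initial phase error (rad)
for _ in range(5000):
    phase += 1e-4                                  # environment-induced drift
    phase += pid.update(setpoint=0.0, measurement=phase)
print(f"residual phase error: {phase:.4f} rad")    # stays close to the setpoint
```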
@mastersthesis{diva2:1440568,
author = {Argillander, Joakim},
title = {{Active Phase Compensation in a Fiber-Optical Mach-Zehnder Interferometer}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5297--SE}},
year = {2020},
address = {Sweden},
}
This master's thesis was conducted at Sectra Communications AB, where the aim was to investigate the state of the art of physical hardware tampering attacks and corresponding protections and mitigations, and finally to combine these into a protection model that conforms to the FIPS 140-2 standard. The methods used to investigate and evaluate the different attacks were literature searches, looking for articles presenting attacks that have been used against real targets, as well as attacks that have not been recorded against a real target but are theoretically possible. After this, an attack tree was constructed, which then developed into a flowchart. The flowchart describes and visualizes how the different attacks could take place.
A qualitative risk analysis was conducted to evaluate and classify the different attacks. This showed which attacks would most likely have the greatest impact on a cryptographic communications device if used against it, and also which of these attacks one must prioritize protecting the device against. The attacks that were regarded to have the highest impact on a cryptographic communication device were memory freezing attacks and radiation imprinting attacks.
After this, a protection model was developed. This was done by placing protections and mitigations in the attack flowchart, showing how one could stop the different attacks. The different protections were then evaluated by comparing their attributes to the requirements of the FIPS 140-2 standard. This evaluation process resulted in a combined protection model that covers the requirements of the FIPS 140-2 standard.
This thesis concludes that there are many different protections available, and that to create solutions that protect the intended system one must perform a deep attack vector analysis, thus finding the weaknesses and vulnerabilities one must protect against.
@mastersthesis{diva2:1436602,
author = {Johansson, Emil},
title = {{Tamper Protection for Cryptographic Hardware:
A survey and analysis of state-of-the-art tamper protection for communication devices handling cryptographic keys}},
school = {Linköping University},
type = {{LIU-ISY/LITH-EX-A--20/5306--SE}},
year = {2020},
address = {Sweden},
}
Fewer customers in Sweden are using cash in their everyday transactions than ever before. If this trend continues, then the Swedish payment system will, in a few years, be entirely controlled by private companies. Therefore the central bank needs a new digital asset trading platform that can replace the reliance on private companies with a system supplied by a government entity (central bank).
This thesis revolves around the creation of a digital asset trading platform focused on the capital market, which can serve that role. The primary focus of the thesis is to investigate how time can be introduced to a blockchain so that events such as a coupon payment or a dividend can be scheduled to occur at a specific time.
The digital trading platform created as part of this thesis was tested after creation to ascertain the best method of introducing time. The results presented in this thesis show that one of the methods has higher accuracy, with an average of 1.3 seconds between the desired execution time and the actual execution time.
The platform was also used to evaluate the feasibility of a digital "currency" based on blockchains, as a replacement for credit cards supplied by Mastercard or Visa. The results indicate that a blockchain solution is a somewhat feasible replacement while suffering from some disadvantages, primarily in throughput.
@mastersthesis{diva2:1431690,
author = {Petersen, Fabian},
title = {{Scheduling in a Blockchain}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5286--SE}},
year = {2020},
address = {Sweden},
}
This thesis presents a comparison of three different parallel algorithms, adapted to calculate the anti-aliased Euclidean distance transform. They were originally designed to calculate the binary Euclidean distance transform. The three algorithms are: SKW, the Jump Flooding Algorithm (JFA), and the Parallel Banding Algorithm (PBA). The results presented here show that the two simpler algorithms, SKW and JFA, can easily be adapted to calculate the anti-aliased transform rather than the binary transform. These two algorithms show good performance with regard to accuracy and precision. The more complex algorithm, PBA, is not as easily adapted. The design of this algorithm is based on some assumptions about the binary transform which do not hold true in the case of the anti-aliased transform. Because of this, the algorithm does not produce a transform with the same level of accuracy as the other algorithms.
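For context, the Jump Flooding Algorithm propagates nearest-seed information in roughly log2(N) passes with strides N/2, N/4, ..., 1. The sketch below is a plain CPU/NumPy version of the binary variant, giving an approximate Euclidean distance transform; the thesis adapts the idea to anti-aliased input and runs it on the GPU, so this is only a conceptual illustration.

```python
import numpy as np

def jump_flood_edt(seeds: np.ndarray) -> np.ndarray:
    """Approximate Euclidean distance transform of a 2D boolean seed image
    using the Jump Flooding Algorithm (JFA). Returns per-pixel distance to
    the (approximately) nearest seed."""
    h, w = seeds.shape
    far = 10 * (h + w)                                  # placeholder "no seed yet"
    nearest = np.full((h, w, 2), -far, dtype=np.int64)  # nearest seed coordinates
    ys, xs = np.nonzero(seeds)
    nearest[ys, xs] = np.stack([ys, xs], axis=1)

    yy, xx = np.mgrid[0:h, 0:w]
    step = 1
    while step < max(h, w):
        step *= 2
    step //= 2
    while step >= 1:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                # Candidate: nearest seed stored at the pixel offset by (dy, dx).
                cand = np.roll(nearest, shift=(dy, dx), axis=(0, 1))
                d_cur = (yy - nearest[..., 0]) ** 2 + (xx - nearest[..., 1]) ** 2
                d_new = (yy - cand[..., 0]) ** 2 + (xx - cand[..., 1]) ** 2
                better = d_new < d_cur
                nearest[better] = cand[better]
        step //= 2
    return np.sqrt((yy - nearest[..., 0]) ** 2 + (xx - nearest[..., 1]) ** 2)

img = np.zeros((8, 8), dtype=bool)
img[2, 2] = img[6, 5] = True
print(np.round(jump_flood_edt(img), 1))
```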
@mastersthesis{diva2:1396944,
author = {Eriksson, Daniel},
title = {{A Comparison of Parallel Algorithms for Calculating the Anti-Aliased Euclidean Distance Transform}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5269--SE}},
year = {2020},
address = {Sweden},
}
Secure messaging protocols have seen big improvements in recent years, with pairwise messaging now being possible to perform efficiently with high security guarantees and without requiring both participants to be online at the same time. For group messaging the solutions have either provided lower security guarantees or used highly inefficient implementations in terms of computation time and data usage, with pairwise channels between all group members, limiting the possible applications. Work is now ongoing to introduce the Messaging Layer Security (MLS) protocol as an efficient standard with high security guarantees for messaging in big groups.
This thesis examines whether current MLS implementations live up to the promised performance properties and compares them to the popular Signal protocol. In general the performance results of MLS are promising and in line with expectations, providing improved performance compared to the Signal protocol as group sizes increase. Two proof of concept applications are created to prove the viability of using MLS in realistic scenarios, one for video calls and one for mobile messaging.
@mastersthesis{diva2:1388449,
author = {Lenz, Silas},
title = {{Evaluation of the Messaging Layer Security Protocol:
A Performance and Usability Study}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5274--SE}},
year = {2020},
address = {Sweden},
}
In 2004 Robert W. Spekkens introduced a toy theory designed to make a case for the epistemic view of quantum mechanics. But how does Spekkens' toy model differ from quantum theory? While some differences are well-established, we attempt to approach this question from a tomographic point of view. More specifically, we provide experimentally viable procedures which enable us to completely characterize the states and gates that are available in the toy model. We show that, in contrast to quantum theory, decompositions of transformations in the toy model must be done in a non-linear fashion.
@mastersthesis{diva2:1386650,
author = {Andersson, Andreas},
title = {{State and Process Tomography:
In Spekkens' Toy Model}},
school = {Linköping University},
type = {{LiTH-ISY-EX--20/5273--SE}},
year = {2020},
address = {Sweden},
}
Depth of field is a naturally occurring effect in lenses describing the distance between the closest and furthest object that appears in focus. The effect is commonly used in film and photography to direct a viewer's focus, give a scene more complexity, or to improve aesthetics. In computer graphics, the same effect is possible, but since there are no natural occurrences of lenses in the virtual world, other ways are needed to achieve it. There are many different approaches to simulate depth of field, but not all are suited for real-time use in computer games. In this thesis, multiple methods are explored and compared to achieve depth of field in real-time with a focus on computer games. The aspect of bokeh is also crucial when considering depth of field, so during the thesis a method to simulate a bokeh effect similar to reality is explored. Three different methods based on the same approach were implemented to research this subject, and their time and memory complexity were measured. A questionnaire was performed to measure the quality of the different methods. The result is three similar methods, but with noticeable differences in both quality and performance. The results give the reader an overview of different methods and directions for implementing them on their own, based on which requirements suit them.
@mastersthesis{diva2:1384527,
author = {Christoffersson, Anton},
title = {{Real-time Depth of Field with Realistic Bokeh:
with a Focus on Computer Games}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5261--SE}},
year = {2020},
address = {Sweden},
}
In order to develop secure Internet of Things (IoT) devices, it is vital that security is considered throughout the development process. However, this is not enough, as vulnerable devices are still making it to the open market. To try and solve this issue, this thesis presents a structured methodology for performing security analysis of IoT platforms.
The presented methodology is based on a black box perspective, meaning that the analysis starts without any prior knowledge of the system. The aim of the presented methodology is to obtain information in such a way as to recreate the system design from the implementation. In turn, the recreated system design can be used to identify potential vulnerabilities.
Firstly, the potential attack surfaces are identified, which the methodology calls interfaces. These interfaces are the points of communication or interaction between two parts of a system. Secondly, since interfaces do not exist in isolation, the surrounding contexts in which these interfaces exist are identified. Finally, the information processed by these interfaces and their contexts is analyzed. Once the information processed by the identified interfaces in their respective contexts is analyzed, a risk assessment is performed based on this information.
The methodology is evaluated by performing an analysis of the IKEA “TRÅDFRI” smart lighting platform. By analyzing the firmware update process of the IKEA “TRÅDFRI” platform it can be concluded that the developers have used standardized protocols and standardized cryptographic algorithms and use these to protect devices from malicious firmware. The analysis does however find some vulnerabilities, even though the developers have actively taken steps to protect the system.
@mastersthesis{diva2:1362068,
author = {Szreder, Mikael},
title = {{IoT Security in Practice:
A Computer Security Analysis of the IKEA ``TRÅDFRI'' Platform}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5257--SE}},
year = {2019},
address = {Sweden},
}
With the increasing size of 3D environments manual modelling becomes a more and more difficult task to perform, while retaining variety in the assets. The use of procedural generation is a well-established procedure within the field today. There have been multiple works presented within the field before, but many of them only focus on certain parts of the process.
In this thesis a system is presented for procedurally generating complete buildings, with an interior. Evaluation has shown that the developed system is comparable to existing systems, both in terms of performance and level of detail. The resulting buildings could be utilized in real-time environments, such as computer games, where enterable buildings often are a requirement for making the environment feel alive.
@mastersthesis{diva2:1337726,
author = {Andersson, Sebastian},
title = {{Detailed Procedurally Generated Buildings}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5215--SE}},
year = {2019},
address = {Sweden},
}
Deformation of terrain is a natural component in real life. A car driving over a muddy area creates deep trails as the mud gives room to the tires. A person running across a snow-covered field causes the snow to deform to the shape of the feet. However, when these types of interactions between terrain and objects are modelled and rendered in real-time computer graphics applications, cheap approximations such as texture splatting is commonly used. This lack of realism not only looks poor to the viewer, it can also cause information to get lost and be a barrier to immersion.
This thesis proposes an efficient system for permanent terrain deformations in real-time. In a volume-preserving manner, the ground material is displaced and animated as objects of arbitrary shapes intersect the terrain. Recent features of GPUs are taken advantage of to achieve high enough performance for the system to be used in real-time applications such as games and simulators.
@mastersthesis{diva2:1333403,
author = {Persson, Jesper},
title = {{Volume-Preserving Deformation of Terrain in Real-Time}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--19/5207--SE}},
year = {2019},
address = {Sweden},
}
Volume visualization has been made available on the web using the Direct Volume Rendering (DVR) technique, powered by the WebGL 1 API. While the technique produces visually pleasing output, the performance of the prototypes that implement it leaves much to be desired. 2017 saw the release of the next version of WebGL, WebGL 2.0, and the introduction of WebAssembly. These APIs and formats are promising tools for formulating a DVR application that can do high-performance rendering at interactive frame rates.
This thesis investigates, implements and evaluates a prototype application that utilizes the optimization methods of Adaptive Texture Maps, Octree Empty Space Skipping and Distance Transform Empty Space Skipping. The Distance Transform is further evaluated by a CPU bound and a GPU bound algorithm implementation. The techniques are assessed on readily available off the shelf devices and hardware. The performance of the prototype application ran on these devices is quantified by measuring computation times of costly operations, and measuring frames per second.
It is concluded that for different hardware, the methods have different properties. While higher FPS is achieved for all devices by utilizing some combination of the optimization methods, the distance transform is the most consistent. A discussion on embedded devices and their quirks is also held, where memory constraints and the resolution of the data is of greater importance than on the non-embedded devices. This results in some suggested actions that can be taken to also potentially enable high-performance rendering of higher resolution data on these devices.
@mastersthesis{diva2:1330460,
author = {Nilsson, Tobias},
title = {{Optimization Methods for Direct Volume Rendering on the Client Side Web}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--19/5204--SE}},
year = {2019},
address = {Sweden},
}
The Signal protocol can be considered state-of-the-art when it comes to secure messaging, but advances in quantum computing stress the importance of finding post-quantum resistant alternatives to its asymmetric cryptographic primitives.
The aim is to determine whether existing post-quantum cryptography can be used as a drop-in replacement for the public-key cryptography currently used in the Signal protocol and what the performance trade-offs may be.
An implementation of the Signal protocol using commutative supersingular isogeny Diffie-Hellman (CSIDH) key exchange operations in place of elliptic-curve Diffie-Hellman (ECDH) is proposed. The benchmark results on a Samsung Galaxy Note 8 mobile device equipped with a 64-bit Samsung Exynos 9 (8895) octa-core CPU show that it takes roughly 8 seconds to initialize a session using CSIDH-512 and over 40 seconds using CSIDH-1024, without platform-specific optimization.
To the best of our knowledge, the proposed implementation is the first post-quantum resistant Signal protocol implementation and the first evaluation of using CSIDH as a drop-in replacement for ECDH in a communication protocol.
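To illustrate the interface being swapped out, the sketch below performs a classical X25519 (ECDH) key agreement with the Python `cryptography` package; X25519 is the elliptic-curve primitive used in the Signal protocol's key agreement. This is the classical baseline, not the CSIDH implementation evaluated in the thesis, but a post-quantum drop-in would expose the same generate/exchange shape.

```python
# Baseline ECDH (X25519) key agreement. A CSIDH-based drop-in replacement would
# expose the same generate/exchange interface, with far more expensive
# group-action computations underneath.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each party combines its private key with the other's public key.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())

assert alice_shared == bob_shared          # both derive the same 32-byte secret
print(len(alice_shared), "byte shared secret")
```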
@mastersthesis{diva2:1331520,
author = {Alvila, Markus},
title = {{A Performance Evaluation of Post-Quantum Cryptography in the Signal Protocol}},
school = {Linköping University},
type = {{LITH-ISY-EX--19/5211--SE}},
year = {2019},
address = {Sweden},
}
This work is about TEMPEST (Transient Electro-Magnetic Pulse Emanation Standard) which is a term for describing the gathering of secret information that leak from a system. The specific observed system is the touchscreen of a smartphone.
The aim is to firstly examine the information needed for an identification of the touched area on a touchscreen of a smartphone by TEMPEST. Given this information, the work then examines the accuracy of such an identification. The method is based on experimental examinations, observations and measures as well as probability distributions.
Conclusions from the results are the fact that the accuracy of an identification becomes higher when synchronizing the communication between the smartphone and its display with a measuring instrument. Moreover, the accuracy of an identification varies with chosen measurement method and this work showed that a higher accuracy was achieved when taking measurements with an absorbing clamp compared to a near-field probe.
@mastersthesis{diva2:1330301,
author = {Celik, Hakan},
title = {{Detektering och analys av röjande signaler från pekskärmar}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5212--SE}},
year = {2019},
address = {Sweden},
}
This thesis examines some of the currently available programs for password guessing, in terms of designs and strengths. The programs Hashcat, OMEN, PassGAN, PCFG and PRINCE were tested for effectiveness, in a series of experiments similar to real-world attack scenarios. Those programs, as well as the program TarGuess, also had their designs examined, in terms of how they use different important parameters. It was determined that most of the programs use different models to deal with password lists, in order to learn how new, similar passwords should be generated. Hashcat, PCFG and PRINCE were found to be the most effective programs in the experiments, in terms of the number of correct passwords guessed each second. Finally, a program for automated password guessing based on the results was built and implemented in the cyber range at the Swedish defence research agency.
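As an illustration of one of the surveyed strategies, a PRINCE-style generator builds candidates by concatenating chains of wordlist entries up to a length limit. The sketch below is a naive stand-in for the real tool, with a made-up wordlist:

```python
from itertools import product

def prince_candidates(wordlist: list, max_len: int, max_chain: int = 3):
    """Yield PRINCE-style password candidates: every concatenation of
    1..max_chain wordlist entries whose total length does not exceed max_len."""
    for chain_len in range(1, max_chain + 1):
        for combo in product(wordlist, repeat=chain_len):
            candidate = "".join(combo)
            if len(candidate) <= max_len:
                yield candidate

words = ["sommar", "2019", "!", "linkoping"]
for guess in prince_candidates(words, max_len=12, max_chain=2):
    print(guess)
# sommar, 2019, !, linkoping, sommar2019, sommar!, 2019sommar, ...
```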
@mastersthesis{diva2:1325687,
author = {Lundberg, Tobias},
title = {{Comparison of Automated Password Guessing Strategies}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5213--SE}},
year = {2019},
address = {Sweden},
}
A 3D model consists of triangles, and in many cases the number of triangles is unnecessarily large for the application of the model. If the camera is far away from a model, why should all triangles be there when in reality it would make sense to only show the contour of the model? Mesh decimation is often used to solve this problem, and its goal is to minimize the number of triangles while still keeping the visual representation intact. Having the decimation algorithm be structure aware, i.e. aware of where the important parts of the model are, such as corners, is of great benefit when doing extreme simplification. The algorithm can then decimate large, almost planar parts to only a few triangles while keeping the important features detailed. This thesis aims to describe the development of a structure-aware decimation algorithm for Spotscale, a company specialized in creating 3D models from drone footage.
@mastersthesis{diva2:1320121,
author = {Böök, Daniel},
title = {{Make it Simpler:
Structure-aware mesh decimation of large scale models}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5192--SE}},
year = {2019},
address = {Sweden},
}
In a short time touchscreens have become one of the most used methods for input to smartphones and other machines such as cash registers, card terminals and ATMs. While the technology change was quick, it introduces the possibility of new security holes. Compromising emanations are a possible security hole in almost all electronic equipment. These emanations can be used in a side-channel attack if they leak information that compromises the security of the device. This thesis studies a single-board computer (SBC) with a touchscreen and a smartphone in order to evaluate whether any usable information leaks regarding what is done on the touchscreen, i.e. where on the screen a user touches. It is shown that the location of a touch can be read out from information leaking through the power cable and wirelessly from the single-board computer. It is also shown that basic information can be read out wirelessly from the smartphone, but further testing is required to evaluate the possibility of extracting usable information from the device.
@mastersthesis{diva2:1323262,
author = {Lidstedt, Joakim},
title = {{Evaluating Compromising Emanations in Touchscreens}},
school = {Linköping University},
type = {{LiTH-ISY-EX--19/5217--SE}},
year = {2019},
address = {Sweden},
}
Contemporary societies have become completely dependent on industrial control systems, which are responsible for many critical infrastructures taken for granted, like water and electricity. Since these systems nowadays utilize distribution and interoperability, involving routing data over the public Internet, the need for countering cyber threats is emerging. This thesis proposes a novel approach for anomaly detection suitable for supervisory control and data acquisition (SCADA) systems. The anomaly detection model utilizes data recorded in an industrial control system and applies machine learning algorithms (random forest, multidimensional scaling plot) in order to capture a baseline profile of the data. An important aspect of the model is that it, by extracting higher-level features, is capable of finding anomalies in traffic volume patterns within the network, which complements the packet-level anomaly detection. Leveraging the fundamental strengths of machine learning, the proposed model could be an important tool for enhancing security in these critical systems.
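To make the baseline-profiling step concrete, the sketch below trains a random forest on a handful of made-up traffic-volume features and classifies new windows with scikit-learn; the feature names and values are invented for illustration and bear no relation to the thesis dataset.

```python
# Illustrative only: a random forest separating "baseline" from "anomalous"
# traffic windows using simple volume features (packet count, mean packet
# size, number of unique peers). The values below are fabricated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.array([
    [120, 310.0, 4],   # normal polling traffic
    [118, 305.0, 4],
    [122, 312.0, 5],
    [900, 64.0, 37],   # scan-like burst, labelled anomalous
    [850, 60.0, 35],
])
y_train = np.array([0, 0, 0, 1, 1])   # 0 = baseline, 1 = anomaly

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict([[121, 308.0, 4], [880, 70.0, 30]]))   # expected: [0 1]
```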
@mastersthesis{diva2:1390689,
author = {Qvigstad, Magnus},
title = {{A Novel Approach to Machine-Learning-Based Intrusion Detection in SCADA Networks}},
school = {Linköping University},
type = {{LiTH-ISY-EX--18/5414--SE}},
year = {2018},
address = {Sweden},
}
To decrease the rendering time of a mesh, Level of Detail can be generated by reducing the number of polygons based on some geometrical error. While this works well for most meshes, it is not suitable for meshes with an associated texture atlas. By iteratively collapsing edges based on an extended version of the Quadric Error Metric taking both spatial and texture coordinates into account, textured meshes can also be simplified. Results show that constraining edge collapses in the seams of a mesh gives poor geometrical appearance when it is reduced to a few polygons. By allowing seam edge collapses and by using a pull-push algorithm to fill areas located outside the seam borders of the texture atlas, the appearance of the mesh is better preserved.
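For background on the underlying Quadric Error Metric, each face contributes a quadric K = p pᵀ built from its plane, and the cost of placing a vertex at v is vᵀ Q v with v in homogeneous coordinates. The sketch below shows only the standard spatial part of this computation (the thesis extends the metric with texture coordinates); the example values are illustrative.

```python
import numpy as np

def plane_quadric(p0: np.ndarray, p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Fundamental error quadric K = p p^T for the plane of triangle (p0, p1, p2),
    with the plane written as ax + by + cz + d = 0 and (a, b, c) unit length."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    p = np.append(n, d)                    # (a, b, c, d)
    return np.outer(p, p)                  # 4x4 symmetric matrix

def vertex_error(Q: np.ndarray, v: np.ndarray) -> float:
    """Quadric error v^T Q v of placing a vertex at v (homogeneous coordinates)."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# Accumulate the quadrics of two faces sharing a vertex, then compare keeping
# the vertex in place with moving it off the shared plane.
Q = plane_quadric(np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
Q += plane_quadric(np.array([0., 0., 0.]), np.array([0., 1., 0.]), np.array([-1., 0., 0.]))
print(vertex_error(Q, np.array([0., 0., 0.])))   # 0.0: the vertex lies on both planes
print(vertex_error(Q, np.array([0., 0., 0.5])))  # 0.5: squared distances 0.25 + 0.25
```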
@mastersthesis{diva2:1262041,
author = {Hedin, Rasmus},
title = {{Evaluation of an Appearance-Preserving Mesh Simplification Scheme for CET Designer}},
school = {Linköping University},
type = {{LiTH-ISY-EX--18/5170--SE}},
year = {2018},
address = {Sweden},
}
Graphics processing units have during the last decade improved significantly in performance while at the same time becoming cheaper. This has led to a new type of usage of the device, where the massive parallelism available in modern GPUs is used for more general purpose computing, also known as GPGPU. Frameworks have been developed just for this purpose, and some of the most popular are CUDA, OpenCL and DirectX Compute Shaders, also known as DirectCompute. The choice of which framework to use may depend on factors such as features, portability and framework complexity. This paper aims to evaluate these aspects, while also comparing the speedup of a parallel implementation of the N-body problem with Barnes-Hut optimization against a sequential implementation.
@mastersthesis{diva2:1239545,
author = {Söderström, Adam},
title = {{A Qualitative Comparison Study Between Common GPGPU Frameworks}},
school = {Linköping University},
type = {{}},
year = {2018},
address = {Sweden},
}
When distributing multiple TV programs on a fixed bandwidth channel, the bit rate of each video stream is often constant. Since the bit rate of video encoded at constant quality typically varies wildly, this is a very suboptimal solution. By instead sharing the total bit rate among all programs, the video quality can be increased by allocating bit rate where it is needed. This thesis explores the statistical multiplexing problem for a specific hardware platform, with the limitations and advantages of that platform. A solution for statistical multiplexing is proposed and evaluated using the major codecs used for TV distribution today. The main advantage of the statistical multiplexer is a much more even quality and a higher minimum quality achieved across all streams. While the solution will need a faster method for bit rate approximation to be practical in terms of performance, it is shown to work as intended.
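The core allocation idea of a statistical multiplexer can be illustrated by splitting a fixed channel bit rate among programs in proportion to their current complexity estimates, subject to a per-stream minimum. The sketch below is a hypothetical illustration with made-up numbers, not the thesis implementation:

```python
def allocate_bitrates(total_kbps: float, complexities: list,
                      min_kbps: float = 500.0) -> list:
    """Split a fixed channel bit rate across programs in proportion to their
    current complexity estimates, guaranteeing every stream at least min_kbps."""
    n = len(complexities)
    if total_kbps < n * min_kbps:
        raise ValueError("channel too small for the per-stream minimum")
    spare = total_kbps - n * min_kbps
    total_c = sum(complexities)
    return [min_kbps + spare * (c / total_c) for c in complexities]

# Three programs sharing a 15 Mbit/s channel: a sports feed (high complexity),
# a talk show, and a static news graphic.
print(allocate_bitrates(15_000, [8.0, 3.0, 1.0]))
# [9500.0, 3875.0, 1625.0]: quality is evened out across the streams
```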
@mastersthesis{diva2:1237211,
author = {Halld\'{e}n, Max},
title = {{Statistical Multiplexing of Video for Fixed Bandwidth Distribution:
A multi-codec implementation and evaluation using a high-level media processing library}},
school = {Linköping University},
type = {{LiTH-ISY-EX--18/5142--SE}},
year = {2018},
address = {Sweden},
}
In computer graphics it is often necessary to construct a large number of objects of specific types, such as buildings. One approach is to create the models procedurally, an approach that often renders function and appearance tightly coupled.
This thesis explores an alternate solution to this problem. We propose a system for procedural building generation based on the separation of function and style. We show our approach to separating appearance from functionality, we then describe our implementation of the system and finally we create a demonstration of its potential.
Our approach offers a large amount of control while allowing for a separation between design of functionality and design of style. The approach could in theory allow for reuse of large databases of models and simplify the creation of procedural generation engines.
@mastersthesis{diva2:1199250,
author = {Pessa, Mikael},
title = {{Functionality-Independent Style-Based Procedural Building Generation}},
school = {Linköping University},
type = {{LiTH-ISY-EX--17/5099--SE}},
year = {2017},
address = {Sweden},
}
Smartphones are extremely popular and in high demand nowadays. They are easy to handle and very intuitive for end users compared with older phones. Approximately two billion people use smartphones all over the world, so it is clear that these phones are very popular. One of the major issues with these smartphones is theft. What happens if someone steals your phone? Why should we try to secure our phones? The reason is that, even if the phone is stolen, the thief should not be able to unlock and use it easily. People are generally careless while typing their password/PIN code or drawing a pattern while others are watching. Someone may see it just by standing next to or behind the person who is typing the PIN or drawing the pattern. This scenario of getting the information is called shoulder surfing. Another scenario is to use a hidden camera, so-called record monitoring.
Shoulder surfing can be used by an attacker/observer to get passwords or PINs. Shoulder surfing is very easy to perform by just looking over the shoulder when a user is typing the PIN or drawing the unlock pattern. Record monitoring needs more preparation, but is not much more complicated to perform. Sometimes it also happens that the phone gets stolen and, by seeing fingerprints or smudge patterns on the phone, the attacker can unlock it. These two are general security threats for smartphone users. This thesis introduces some different approaches to overcome the above-mentioned security threats in smartphones. The basic aim is to make shoulder surfing and record monitoring more difficult, so that they will not be easy for an observer to perform after switching to the new techniques introduced in the thesis.
In this thesis, the usability of each method developed is described, as well as future uses of these approaches. There are a number of techniques by which a user can protect the phone from observation attacks. Some of these are considered, and a user interface evaluation is performed in the later phase of development. I also consider some important aspects while developing the methods, such as user friendliness and good UI concepts. I also evaluate the actual security added by the methods, and the overall user impression. Two separate user studies have been performed, the first one with students from the Computer Science department, and then one with students from other departments. The results indicate that students from Computer Science are more attracted to the new security solution than students from other departments.
@mastersthesis{diva2:1150392,
author = {Haitham, Seror},
title = {{Design and Evaluation of Accelerometer Based User Authentication Methods}},
school = {Linköping University},
type = {{LiTH-ISY-EX--17/5088--SE}},
year = {2017},
address = {Sweden},
}
The attention for low-latency computer vision and video processing applications is growing every year, not least for VR and AR applications. In this thesis the Contrast Limited Adaptive Histogram Equalization (CLAHE) and Radial Distortion algorithms are implemented using both CUDA and OpenCL to determine whether these types of algorithms are suitable for implementations aimed to run on GPUs when low latency is of utmost importance. The result is an implementation of the block versions of the CLAHE algorithm which utilizes the built-in interpolation hardware that resides on the GPU to reduce block effects, and an implementation of the Radial Distortion algorithm that corrects a 1920x1080 frame in 0.3 ms. Further, this thesis concludes that the GPU platform might be a good choice if the data to be processed can be transferred to, and possibly from, the GPU fast enough, and that the choice of compute API mostly is a matter of taste.
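For background, radial distortion is commonly described by a polynomial in the squared radius (a Brown-style model), and correcting a frame amounts to sampling the source image at the distorted position of every output pixel, which is exactly the kind of lookup GPU texture interpolation accelerates. The NumPy sketch below only computes that mapping; the coefficients and normalization are illustrative and not taken from the thesis.

```python
import numpy as np

def distort_coords(xu: np.ndarray, yu: np.ndarray, k1: float, k2: float):
    """Brown-style radial distortion: map undistorted normalized coordinates
    (xu, yu) to distorted ones using x_d = x_u * (1 + k1*r^2 + k2*r^4)."""
    r2 = xu ** 2 + yu ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return xu * scale, yu * scale

# To undistort a 1920x1080 frame, compute for every output pixel where it lands
# in the distorted source image and sample there (on a GPU this sampling is the
# hardware-interpolated texture fetch).
h, w = 1080, 1920
ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
xu = (xs - w / 2) / (w / 2)                     # normalize to roughly [-1, 1]
yu = (ys - h / 2) / (w / 2)
xd, yd = distort_coords(xu, yu, k1=-0.12, k2=0.01)   # illustrative coefficients
src_x = xd * (w / 2) + w / 2                    # back to pixel coordinates
src_y = yd * (w / 2) + h / 2
print(src_x[0, 0], src_y[0, 0])                 # where the top-left pixel samples from
```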
@mastersthesis{diva2:1150085,
author = {Tarassu, Jonas},
title = {{GPU-Accelerated Frame Pre-Processing for Use in Low Latency Computer Vision Applications}},
school = {Linköping University},
type = {{LiTH-ISY-EX--17/5090--SE}},
year = {2017},
address = {Sweden},
}
The Internet of Things (IoT) is seen as one of the next Internet revolutions. In the near future the majority of all devices connected to the Internet will be IoT devices. These devices will connect previously offline constrained systems, so it is essential to ensure end-to-end security for such devices. Object security is a concept where the actual packet, or sensitive parts of the packet, are encrypted instead of the radio channel. With this mechanism, a compromised node in the network still only sees encrypted data, ensuring full end-to-end security. This paper proposes an architecture for using the object security format COSE in a typical constrained short-range radio-based IoT platform. The IoT platform utilizes Bluetooth Low Energy and the Constrained Application Protocol for data transmission via a capillary gateway. A proof-of-concept implementation based on the architecture validates that the security solution is implementable. An overhead comparison between current channel security guidelines and the proposed object security solution results in a similar size for each data packet.
The thesis concludes that object security should be seen as an alternative for ensuring end-to-end security for the Internet of Things.
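The essence of object security can be sketched as protecting the payload itself with an AEAD so that intermediate gateways only ever see ciphertext. The sketch below uses AES-GCM from the Python `cryptography` package and a plain dict standing in for the CBOR-encoded COSE structure; it is a conceptual illustration, not the COSE format or the thesis implementation.

```python
# Object security in miniature: protect the payload itself with an AEAD so a
# compromised gateway only ever sees ciphertext. A plain dict stands in for the
# CBOR-encoded COSE structure used by the actual protocol.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)      # shared between the two endpoints
aesgcm = AESGCM(key)

def protect(payload: bytes, kid: bytes) -> dict:
    nonce = os.urandom(12)
    aad = b"coap-options"                      # data that stays visible but is authenticated
    return {"kid": kid, "nonce": nonce, "aad": aad,
            "ciphertext": aesgcm.encrypt(nonce, payload, aad)}

def unprotect(obj: dict) -> bytes:
    return aesgcm.decrypt(obj["nonce"], obj["ciphertext"], obj["aad"])

msg = protect(b'{"temperature": 21.5}', kid=b"sensor-01")
print(unprotect(msg))                          # only holders of the key can do this
```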
@mastersthesis{diva2:1114835,
author = {Tjäder, Hampus},
title = {{End-to-end Security Enhancement of an IoT Platform Using Object Security}},
school = {Linköping University},
type = {{LiTH-ISY-EX--17/5053--SE}},
year = {2017},
address = {Sweden},
}
This thesis presents a usability evaluation of a non-conventional way to visualize virtual objects in Augmented Reality (AR). The virtual objects consist of a window that can be placed on a real surface and then opened to view a virtual room containing some virtual objects. This concept was applied in a smartphone application created and designed to evaluate the usability of the concept.
The usability was evaluated with user testing methods designed to gather information about the interaction and the perception of handheld Augmented Reality (HAR) applications. The usefulness of the concept was also evaluated by designing the application for entertainment and then evaluating the users' interest in such an application. Design guidelines and design patterns created for HAR applications were followed when designing the application.
The results of this project have shown that the concept is easy to both comprehend and interact with. This applies to users with previous smartphone experience but not much experience with AR. The results have also shown that smartphone users find an interest in using an application of this type: an application designed for entertainment and built around the described concept.
The purpose of evaluating this concept has been to verify its usefulness in order to promote its use in AR applications. This would create variety in the visualization of virtual objects in AR. It would also open up new opportunities in the use of AR.
@mastersthesis{diva2:1116764,
author = {Lindqvist, David},
title = {{Augmented Reality and an Inside-Object-View Concept:
A Usability Evaluation}},
school = {Linköping University},
type = {{LiTH-ISY-EX--17/5046--SE}},
year = {2017},
address = {Sweden},
}
Particle systems are used to create visual effects in real-time applications such as computer games. However, emitted particles are often transient and do not leave a lasting impact on a 3D scene. This thesis work presents a real-time method that enables GPU particle systems to paint meshes in a 3D scene as the result of particle collisions, thus adding detail to and leaving a lasting impact on a scene. The method uses screen space collision detection and a mapping from screen space to the texture space of meshes to determine where to apply paint. The method was tested for its time complexity and how well it performed in scenarios similar to those found in computer games. The results show that the method can likely be used in computer games. Performance and visual fidelity of the paint application are not directly dependent on the amount of simulated particles, but depend only on the complexity of the meshes and their texture mapping as well as the resolution of the paint. It is concluded that the method is renderer agnostic and could be added to existing GPU particle systems, and that other types of effects than those shown in the thesis could be achieved by using the method.
@mastersthesis{diva2:1107823,
author = {Larsson, Andreas},
title = {{Real-Time Persistent Mesh Painting with GPU Particle Systems}},
school = {Linköping University},
type = {{LiTH-ISY-EX--17/5027--SE}},
year = {2017},
address = {Sweden},
}
Virtual reality (VR) has seen a surge in popularity in recent years. Motion sickness in VR has long been a problem and is still a major obstacle to commercial success. This work aims to implement support for the Oculus Rift in the software Configura and to evaluate navigation with a hand-held controller in a VR environment. The focus is on finding suitable walking and rotation speeds for efficient controller-based navigation, and on the effect these speeds have on motion sickness. A user study was conducted in which participants tried different walking and rotation speeds in tests of increasing navigational difficulty. The results of the user study show that the test subjects experienced severe motion sickness symptoms at all speeds. There were also indications that users perform better at lower speeds.
@mastersthesis{diva2:1083448,
author = {Wikström, Sebastian},
title = {{Gång- och rotationshastigheter för effektiv navigering i VR}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--17/0464--SE}},
year = {2017},
address = {Sweden},
}
This thesis explores Voxel Cone Tracing as a possible Global Illumination solution on mobile devices.
The rapid increase in performance of low-power graphics processors has made a big impact. More advanced computer graphics algorithms are now possible on a new range of devices. One category of such algorithms is Global Illumination, which calculates realistic lighting in rendered scenes. The combination of advanced graphics and portability is of special interest for new technologies like Virtual Reality.
The result of this thesis shows that while it is possible to implement a state-of-the-art Global Illumination algorithm, the performance of mobile Graphics Processing Units is still not sufficient to make it usable in real-time.
@mastersthesis{diva2:1148572,
author = {Wahl\'{e}n, Conrad},
title = {{Global Illumination in Real-Time using Voxel Cone Tracing on Mobile Devices}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/5011--SE}},
year = {2016},
address = {Sweden},
}
Today, processor development focuses heavily on parallel performance by providing multiple cores that programs can use. The problem with the current version of OpenGL is that it lacks support for utilizing multiple CPU threads for issuing rendering commands. Vulkan is a new low-level graphics API that gives more control to developers and provides tools to properly utilize multiple threads for performing rendering operations in parallel. This should give increased performance in situations where the CPU limits the performance of the application, and the goal of this report is to evaluate how large these performance gains can be in different scenes. For this evaluation, a test program is written with both Vulkan and OpenGL implementations, and by rendering the same scene using the different APIs and techniques the performance can be compared. In addition to evaluating the multithreaded rendering performance, the new explicit pipelines in Vulkan are also evaluated.
@mastersthesis{diva2:1037368,
author = {Blackert, Axel},
title = {{Evaluation of Multi-Threading in Vulkan}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--16/0458--SE}},
year = {2016},
address = {Sweden},
}
This is a study taking an information-theoretic approach to quantum contextuality. The approach is to use the memory complexity of finite-state machines to quantify quantum contextuality. These machines simulate the outcome behaviour of sequential measurements on systems of quantum bits as predicted by quantum mechanics. Of interest is the question of whether classical representations by finite-state machines are able to effectively represent the state-independent contextual outcome behaviour. Here we consider spatial efficiency, rather than the temporal efficiency considered by D. Gottesman (1999), for the particular measurement dynamics in systems of quantum bits. Extensions of cases found in the adjacent study of Kleinmann et al. (2010) are established, by which upper bounds on memory complexity for particular scenarios are found. Furthermore, an optimal machine structure for simulating any n-partite system of quantum bits is found, by which a lower bound on the memory complexity is obtained for every natural number n. Within this finite-state machine approach, questions of foundational concern regarding quantum mechanics were sought to be addressed; however, no novel conclusions on such concerns are reported here.
@mastersthesis{diva2:957788,
author = {Harrysson, Patrik},
title = {{Memory Cost of Quantum Contextuality}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/4967--SE}},
year = {2016},
address = {Sweden},
}
Viewshed refers to the calculation and visualisation of which parts of a terrain are visible from a given observer point. It is used within many fields, such as military planning or telecommunication tower placement. So far, no general fast methods exist for calculating the viewshed for multiple observers that may, for instance, represent a road within the terrain. Additionally, if the terrain contains overlapping structures such as man-made constructions like bridges, most current viewshed algorithms fail. This report describes two novel methods for viewshed calculation using multiple observers for terrain that may contain overlapping structures. The methods have been developed at Vricon in Linköping as a Master's Thesis project. Both methods are implemented on the graphics processing unit using the OpenGL graphics library, following a computer graphics approach. Results are presented in the form of figures and images, as well as running-time tables using two different test setups. Lastly, possible future improvements are also discussed. The results show that the first method is a viable real-time solution and that the second method requires some additional work.
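As background to the multi-observer methods above, the basic single-observer viewshed on a heightmap can be computed by marching a line of sight from the observer to every cell and keeping track of the maximum elevation angle seen so far. The NumPy sketch below is a hypothetical brute-force illustration of that principle, not one of the two GPU methods developed in the thesis.

    import numpy as np

    def viewshed(height, ox, oy, observer_height=1.8):
        """Brute-force single-observer viewshed on a heightmap (True = visible)."""
        h, w = height.shape
        eye = height[oy, ox] + observer_height
        visible = np.zeros((h, w), dtype=bool)
        for ty in range(h):
            for tx in range(w):
                n = max(abs(tx - ox), abs(ty - oy))
                if n == 0:
                    visible[ty, tx] = True
                    continue
                # sample the line of sight and compare elevation angles
                xs = np.linspace(ox, tx, n + 1)[1:]
                ys = np.linspace(oy, ty, n + 1)[1:]
                dist = np.hypot(xs - ox, ys - oy)
                angles = (height[ys.astype(int), xs.astype(int)] - eye) / dist
                target_angle = angles[-1]
                visible[ty, tx] = target_angle >= angles[:-1].max(initial=-np.inf)
        return visible

    terrain = np.random.rand(64, 64) * 10
    print("visible cells:", viewshed(terrain, 32, 32).sum())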
@mastersthesis{diva2:954165,
author = {Christoph, Heilmair},
title = {{GPU-Based Visualisation of Viewshed from Roads or Areas in a 3D Environment}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/4951--SE}},
year = {2016},
address = {Sweden},
}
For synthetic data generation with concave collision objects, two physics simulation techniques are investigated: convex decomposition of mesh models for globally concave collision results, used with the physics simulation library Bullet, and a GPU-implemented rigid body solver using spherical decomposition and impulse-based physics with spatial-sorting-based collision detection.
Using the GPU solution for rigid body physics suggested in the thesis, scenes containing large numbers of bodies can be simulated up to 2 times faster than with Bullet 2.83.
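The impulse-based physics mentioned above resolves contacts by applying equal and opposite impulses along the contact normal. The sketch below shows this for two colliding spheres in NumPy; it is a generic textbook formulation (no spatial sorting, no stacking logic) and only illustrates the idea behind the thesis's GPU solver.

    import numpy as np

    def resolve_sphere_contact(p1, v1, m1, r1, p2, v2, m2, r2, restitution=0.5):
        """Apply an impulse along the contact normal if two spheres overlap."""
        normal = p2 - p1
        dist = np.linalg.norm(normal)
        if dist >= r1 + r2 or dist == 0.0:
            return v1, v2                       # no contact
        normal /= dist
        rel_vel = np.dot(v2 - v1, normal)       # closing speed along the normal
        if rel_vel > 0:
            return v1, v2                       # already separating
        j = -(1 + restitution) * rel_vel / (1 / m1 + 1 / m2)   # impulse magnitude
        return v1 - (j / m1) * normal, v2 + (j / m2) * normal

    v1, v2 = resolve_sphere_contact(np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), 1.0, 0.6,
                                    np.array([1.0, 0, 0]), np.array([-1.0, 0, 0]), 1.0, 0.6)
    print(v1, v2)   # the spheres separate with half the original closing speed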
@mastersthesis{diva2:943995,
author = {Edhammer, Jens},
title = {{Rigid Body Physics for Synthetic Data Generation}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/4958--SE}},
year = {2016},
address = {Sweden},
}
More and more sensitive information is communicated digitally, and with that comes the demand for security and privacy in the services being used. An accurate QoS metric for these services is of interest both for the customer and the service provider. This thesis has investigated the impact of different parameters on the perceived voice quality for encrypted VoIP, using the PESQ score as reference value. Based on this investigation, a parametric prediction model has been developed that outputs an R-value comparable to that of the widely used E-model from ITU. This thesis can further be seen as a template for how to construct models of other equipment or codecs than those evaluated here, since they affect the result but are hard to parametrise.
The results of the investigation are consistent with previous studies regarding the impact of packet loss, and the impact of jitter is shown to be significant above 40 ms. Results from three different packetizers are presented, which illustrates the need to take such aspects into consideration when constructing a model to predict voice quality. The model derived from the investigation performs well, with no mean error and a standard deviation of the error of a mere 1.45 R-value units when validated under conditions to be expected in GSM networks. When validated against an emulated 3G network the standard deviation is even lower.
@mastersthesis{diva2:934326,
author = {Andersson, Martin},
title = {{Parametric Prediction Model for Perceived Voice Quality in Secure VoIP}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/4940--SE}},
year = {2016},
address = {Sweden},
}
In today's online world it is important to protect your organization's valuable information and assets. Information can be stolen or destroyed in many different ways, and this needs to be dealt with not only on a technical level, but also on a management level. However, the current methods are not very intuitive and require a lot of familiarity with information security management. This report explores how planning of information security within an organization can instead be accomplished in a simple and pragmatic manner, without discouraging the user with too much information or making it too complicated. This is done by examining the requirements and controls of the ISO 27000 framework and, with those in mind, creating a method that is more useful, intuitive, and easy to follow.
@mastersthesis{diva2:925523,
author = {Eriksson, Carl-Henrik},
title = {{Standardiserad informationssäkerhet inom systemutveckling:
En pragmatisk metod för uppehållande av en hög standard med ramverket ISO 27000}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/4932--SE}},
year = {2016},
address = {Sweden},
}
The computational capacity of graphics cards for general-purpose computing has progressed fast over the last decade. A major reason is computationally heavy computer games, where the standards for performance and high-quality graphics constantly rise. Another reason is better-suited technologies for programming the graphics cards. Combined, the product is devices with high raw performance and the means to access that performance. This thesis investigates some of the current technologies for general-purpose computing on graphics processing units. The technologies are primarily compared by benchmarking performance and secondarily by factors concerning programming and implementation. The choice of technology can have a large impact on performance. The benchmark application found the execution times of the fastest technology, CUDA, and the slowest, OpenCL, to differ by a factor of two. The benchmark application also found that the older technologies, OpenGL and DirectX, are competitive with CUDA and OpenCL in terms of resulting raw performance.
@mastersthesis{diva2:909410,
author = {Sörman, Torbjörn},
title = {{Comparison of Technologies for General-Purpose Computing on Graphics Processing Units}},
school = {Linköping University},
type = {{LiTH-ISY-EX--16/4923--SE}},
year = {2016},
address = {Sweden},
}
Virtual reality is a concept that has existed for some time, but recent advances in the performance of commercial computers have led to the development of different commercial head-mounted displays, for example the Oculus Rift. With this growing interest in virtual reality, it is important to evaluate existing techniques used when designing user interfaces. In addition, it is also important to develop new techniques to give the user the best experience when using virtual reality applications.
This thesis investigates the design of a user interface for virtual reality using the Oculus Rift combined with the Razer Hydra and Leap Motion as input devices. A set of different graphical user interface components was developed and, together with the different input devices, evaluated in a user test to try to determine their advantages. During the implementation of the project, the importance of giving the user feedback became apparent: adding both visual and aural feedback when interacting with the GUI increases the usability of the system.
According to the conducted user test, people preferred using the Leap Motion even though it was not the easiest input device to use. The test also showed that the current implementation of the input devices was not precise enough to draw conclusions about the different user interface components.
@mastersthesis{diva2:898394,
author = {Silverhav, Robin},
title = {{Design of a Graphical User Interface for Virtual Reality with Oculus Rift}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4910--SE}},
year = {2015},
address = {Sweden},
}
This thesis examined how virtual reality could be used in interior design. The work was limited to virtual reality experienced through a head-mounted display. The method was to integrate virtual reality into an existing interior design software called CET Designer. After investigating the available commercial virtual reality hardware and software, the Oculus SDK and OpenVR were chosen. Unity 3D was used as a prototyping tool for experimenting with different interaction and navigation methods. A user study with 14 participants was performed, comparing four different navigation methods. First-person-shooter-style controls using a gamepad proved to be the best. It can also be concluded that a poor navigation style can decrease the user experience in virtual reality and cause motion sickness.
@mastersthesis{diva2:875094,
author = {Tingvall, Jesper},
title = {{Interior Design and Navigation in Virtual Reality}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4902--SE}},
year = {2015},
address = {Sweden},
}
Distance field text rendering has many advantages compared to most other text rendering solutions. Two of the advantages are the possibility to scale the glyphs without losing the crisp edges and lower memory consumption. A drawback of distance field text rendering can be high distance field generation time. The solution for fast distance field text rendering in this thesis generates the distance fields by drawing distance gradients locally over the outlines of the glyphs. This method is much faster than the older exact methods for generating distance fields, which often include multiple passes over the whole image.
Using the solution for text rendering proposed in this thesis results in good-looking text that is generated on the fly. The distance fields are generated on a mobile device in less than 10 ms for most of the glyphs, in good quality, which is less than the time between two frames.
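For comparison with the local-gradient generation described above, the exact (and slower, multi-pass) way to obtain a signed distance field from a rasterized glyph is a pair of Euclidean distance transforms, one for the inside and one for the outside. The sketch below uses SciPy for this; it illustrates what a distance field is, not the thesis's fast generation method.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance_field(glyph_mask):
        """glyph_mask: boolean array, True inside the glyph. Returns signed distances
        (negative inside, positive outside), the representation sampled when rendering."""
        outside = distance_transform_edt(~glyph_mask)   # distance to the glyph from outside
        inside = distance_transform_edt(glyph_mask)     # distance to the background from inside
        return outside - inside

    # toy "glyph": a filled disc
    y, x = np.mgrid[0:64, 0:64]
    glyph = (x - 32) ** 2 + (y - 32) ** 2 < 20 ** 2
    sdf = signed_distance_field(glyph)
    # a renderer reconstructs the crisp edge with a threshold (or smoothstep) at 0
    print("edge texels:", int((np.abs(sdf) < 1.0).sum()))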
@mastersthesis{diva2:857060,
author = {Adamsson, Gustav},
title = {{Fast and Approximate Text Rendering Using Distance Fields}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4868--SE}},
year = {2015},
address = {Sweden},
}
The Advanced Encryption Standard is one of the most common encryption algorithms. It is highly resistant to mathematical and statistical attacks; however, this security is based on the assumption that an adversary cannot access the algorithm's internal state during encryption or decryption. Power analysis is a type of side-channel analysis that exploits information leakage through the power consumption of physical realisations of cryptographic systems. Power analysis attacks capture intermediate results during AES execution, which combined with knowledge of the plaintext or the ciphertext can reveal key material. This thesis studies and compares simple power analysis, differential power analysis and template attacks using a cheap consumer oscilloscope against AES-128 implemented on an 8-bit microcontroller. Additionally, the shuffling and masking countermeasures are evaluated in terms of security and performance. The thesis also presents a practical approach to template building and device characterisation. The results show that attacking a naive implementation with differential power analysis requires little effort, both in preparation and in computation time. Template attacks require the fewest measurements but significant preparation. Simple power analysis by itself cannot break the key but proves helpful in simplifying the other attacks. It is found that shuffling significantly increases the number of traces required to break the key, while masking forces the attacker to use higher-order techniques.
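The differential power analysis mentioned above can be illustrated with the classic difference-of-means test: for each key-byte guess, traces are partitioned by one predicted intermediate bit, and the correct guess produces the largest difference between the two partitions' mean traces. The NumPy sketch below runs on synthetic traces and uses a random permutation as a stand-in for the AES S-box, so it only demonstrates the statistical idea, not a full attack on a real device.

    import numpy as np

    rng = np.random.default_rng(0)
    SBOX = np.random.default_rng(1).permutation(256)   # stand-in for the AES S-box table
    TRUE_KEY = 0x3A
    plaintexts = rng.integers(0, 256, size=3000)

    def hw(x):
        return bin(int(x)).count("1")

    # synthetic traces: one sample leaks the Hamming weight of the S-box output, plus noise
    leak = np.array([hw(SBOX[p ^ TRUE_KEY]) for p in plaintexts])
    traces = leak[:, None] + rng.normal(0, 2.0, size=(len(plaintexts), 50))

    def dpa_guess(plaintexts, traces):
        """Difference-of-means DPA: the key guess with the largest peak wins."""
        scores = []
        for guess in range(256):
            bit = np.array([SBOX[p ^ guess] & 1 for p in plaintexts], dtype=bool)
            diff = traces[bit].mean(axis=0) - traces[~bit].mean(axis=0)
            scores.append(np.abs(diff).max())
        return int(np.argmax(scores))

    print("recovered key byte:", hex(dpa_guess(plaintexts, traces)))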
@mastersthesis{diva2:874463,
author = {Fransson, Mattias},
title = {{Power Analysis of the Advanced Encryption Standard:
Attacks and Countermeasures for 8-bit Microcontrollers}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4907--SE}},
year = {2015},
address = {Sweden},
}
TLS is a vital protocol used to secure communication over networks, and it provides an end-to-end encrypted channel between two directly communicating parties. In certain situations it is not possible, or desirable, to establish direct connections from a client to a server, for example when connecting to a server located on a secure network behind a gateway. In these cases chained connections are required.
Mutual authentication and end-to-end encryption are important capabilities in a high assurance environment. These are provided by TLS, but there are no known solutions for chained connections.
This thesis explores multiple methods that provide the functionality for chained connections using TLS in a high assurance environment with trusted servers and a public key infrastructure. A number of methods are formally described and analysed according to multiple criteria reflecting both functionality and security requirements. Furthermore, the most promising method is implemented and tested in order to verify that the method is viable in a real-life environment.
The proposed solution modifies the TLS protocol through the use of an extension which allows for the distinction between direct and chained connections. The extension, which also allows for specifying the structure of chained connections, is used in the implementation of a method that creates chained connections by layering TLS connections inside each other. Testing demonstrates that the overhead of the method is negligible and that the method is a viable solution for creating chained connections with mutual authentication using TLS.
@mastersthesis{diva2:840363,
author = {Petersson, Jakob},
title = {{Analysis of Methods for Chained Connections with Mutual Authentication Using TLS}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4873--SE}},
year = {2015},
address = {Sweden},
}
Random number generators are basic building blocks of modern cryptographic systems. Usually pseudo random number generators, carefully constructed deterministic algorithms that generate seemingly random numbers, are used. These are built upon foundations of thorough mathematical analysis and have been subjected to stringent testing to make sure that they can produce pseudo random sequences at a high bit-rate with good statistical properties.
A pseudo random number generator must be initiated with a starting value. Since they are deterministic, the same starting value used twice on the same pseudo random number generator will produce the same seemingly random sequence. Therefore it is of utmost importance that the starting value contains enough entropy so that the output cannot be predicted or reproduced in an attack. To generate a high-entropy starting value, a true random number generator, which uses sampling of some physical non-deterministic phenomenon to generate entropy, can be used. These are generally slower than their pseudo random counterparts but in turn need not generate the same amount of random values.
In field programmable gate arrays (FPGAs), generating random numbers is not trivial since they are built upon digital logic. A popular technique to generate entropy within an FPGA is to sample jittery clock signals. A quite recent technique proposed to create robust clock signals that contain such jitter is to use self-timed ring oscillators. These are structures in which several events can propagate freely at an evenly spaced phase distribution.
In this thesis, self-timed rings of six different lengths are implemented on a specific FPGA hardware. The different implementations are tested with the TestU01 test suite. The results show that two of the implementations have a good oscillatory behaviour that is well suited for use as random number generators. Others exhibit unexpected behaviours that are not suited for use in a random number generator. Two of the implemented random generators passed all tests in the TestU01 batteries Alphabit and BlockAlphabit. One of the generators was deemed not fit for use in a random number generator after failing all of the tests. The last three were not subjected to any tests since they did not behave as expected.
@mastersthesis{diva2:826555,
author = {Einar, Marcus},
title = {{Implementing and Testing Self-Timed Rings on a FPGA as Entropy Sources}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4845--SE}},
year = {2015},
address = {Sweden},
}
Ultra low-power wireless technology is used when devices need to consume very little power. ANT+ sensors can run for years on a single coin cell battery. In this thesis, ANT+ sensor data is used in an application that can store and visualize the data.
@mastersthesis{diva2:827000,
author = {Ericsson, Marcus},
title = {{Transmission, Storage, and Visualization of Data with ANT+}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4874--SE}},
year = {2015},
address = {Sweden},
}
Nowadays, optical fiber is widely used in several areas, especially in communication networking. The main reason is that optical fiber has low attenuation and high bandwidth. However, the switching functionality is performed in the electrical domain (inside the router), and thus we have transmission delays in the network lanes. In this study we explore the possibility of developing a hardware "plug-in" that can be connected in parallel with the routers of the network, enabling a router with the "plug-in" to let time-critical traffic bypass it. We researched different switching techniques for optical fibers and realized it would be an expensive endeavor to create one for a large number of wavelengths/connections; thus we scaled it down to a proof-of-concept "plug-in" where we use fiber-optical switches and Mux/Demuxes for our design.
With our chosen optical components, we were able to bypass the routers (layer 3 switches) in our test environment and switch between different users to choose which one has the direct link. The conclusion can be drawn that it is possible to create such a "plug-in", which could be used by ISPs to provide a faster lane to consumers with little modification of existing networks.
@mastersthesis{diva2:823675,
author = {Kronstrand, Alexander and Holmqvist, Andreas},
title = {{Overlay Network}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--15/0441--SE}},
year = {2015},
address = {Sweden},
}
The number of devices connected to the Internet is growing rapidly, and this includes devices that have no contact with humans at all. This type of device is expected to increase the most, which is why the field of device fingerprinting is an area that requires further investigation. This thesis measures and evaluates the accelerometer, camera and gyroscope sensors of a mobile device for use in device fingerprinting. The method used is based on previous research on sensor identification together with methods used for designing a biometric system. The combination of long-proven methods from the biometric area with new research on sensor identification is a new approach to device fingerprinting.
@mastersthesis{diva2:823010,
author = {Karlsson, Anna},
title = {{Device Sensor Fingerprinting:
Mobile Device Sensor Fingerprinting With A Biometric Approach}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4838--SE}},
year = {2015},
address = {Sweden},
}
Parallelization is the answer to the ever-growing demands of computing power by taking advantage of multi-core processor technology and modern many-core graphics compute units. Multi-core CPUs and many-core GPUs have the potential to substantially reduce the execution time of a program but it is often a challenging task to ensure that all available hardware is utilized. OpenMP and OpenCL are two parallel programming frameworks that have been developed to allow programmers to focus on high-level parallelism rather than dealing with low-level thread creation and management. This thesis applies these frameworks to the area of computed tomography by parallelizing the image reconstruction algorithm DIRA and the photon transport simulation toolkit CTmod. DIRA is a model-based iterative reconstruction algorithm in dual-energy computed tomography, which has the potential to improve the accuracy of dose planning in radiation therapy. CTmod is a toolkit for simulating primary and scatter projections in computed tomography to optimize scanner design and image reconstruction algorithms. The results presented in this thesis show that parallelization combined with computational optimization substantially decreased execution times of these codes. For DIRA the execution time was reduced from two minutes to just eight seconds when using four iterations and a 16-core CPU so a speedup of 15 was achieved. CTmod produced similar results with a speedup of 14 when using a 16-core CPU. The results also showed that for these particular problems GPU computing was not the best solution.
@mastersthesis{diva2:819916,
author = {Örtenberg, Alexander},
title = {{Parallelization of DIRA and CTmod Using OpenMP and OpenCL}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4834--SE}},
year = {2015},
address = {Sweden},
}
Providing destruction in games is usually achieved by having pre-calculated fracturing points, swapping models at appropriate times while hiding the crimes with a puff of smoke or an explosion. An area of continued research is procedural destruction, where an object fractures in a realistic way depending on applied forces such as gravity, explosions or load.
This thesis proposes and begins the implementation of a triangle-based surface representation capable of supporting procedural destruction in real-time for an underlying point-based simulation, deriving the methodology from the paper by M. Pauly et al [12].
Too wide a project scope prevented the implementation from fully realising the initial goals; the surface and physics simulations were never married into a single simulation. This work is one half of a larger project on procedural destruction, focusing primarily on the surface representation, where the second half is detailed in the report by C. Stegmayr [14].
Even without a complete simulation, performance is evidently a limiting factor. For more detailed simulations, with a simple test mesh and a small step size when propagating a fracture, frame times quickly rise to almost 247 ms/frame. There are multiple areas of improvement for the implementation to reduce frame times; however, scalability and performance remain major points of concern due to inherent challenges with running multiple fractures in parallel. Unless scaling can be improved, it is worth pursuing alternative approaches.
@mastersthesis{diva2:819020,
author = {Lindmark, Jonas},
title = {{Fracturable Surface Model for Particle-based Simulations}},
school = {Linköping University},
type = {{LITH-ISY-EX--08/4083}},
year = {2015},
address = {Sweden},
}
Over the past 15 years, modern PC graphics cards (GPUs) have changed from being pure graphics accelerators into parallel computing platforms. Several new parallel programming languages have emerged, including NVIDIA's parallel programming language for GPUs (CUDA).
This report explores two related problems in parallel: How well-suited is CUDA for implementing algorithms that utilize non-trivial data structures? And, how does one develop a complex algorithm that uses a CUDA system efficiently?
A guide for how to implement complex algorithms in CUDA is presented. Simulation of a dense 2D particle system is chosen as the problem domain for algorithm optimization. Two algorithmic optimization strategies are presented which reduce the computational workload when simulating the particle system. The strategies can either be used independently or be combined for slightly improved results. Finally, the resulting implementations are benchmarked against a simpler implementation on a normal PC processor (CPU) as well as a simpler GPU algorithm.
A simple GPU solution is shown to run at least 10 times faster than a simple CPU solution. An improved GPU solution can then yield another 10 times speed-up, while sacrificing some accuracy.
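A common workload-reducing strategy for dense particle simulations of the kind described above is to bin particles into a uniform grid so that interaction tests only consider neighboring cells instead of all pairs. The Python sketch below shows that idea on the CPU; it is a generic illustration and not necessarily one of the two strategies used in the thesis.

    import numpy as np
    from collections import defaultdict

    def neighbor_pairs(positions, radius):
        """Find all particle pairs closer than `radius` using a uniform grid,
        avoiding the O(N^2) all-pairs test."""
        cells = defaultdict(list)
        for i, p in enumerate(positions):
            cells[tuple((p // radius).astype(int))].append(i)   # bin each particle by cell
        pairs = []
        for (cx, cy), members in cells.items():
            for dx in (-1, 0, 1):                 # only look at the 3x3 cell neighborhood
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), ()):
                        for i in members:
                            if i < j and np.linalg.norm(positions[i] - positions[j]) < radius:
                                pairs.append((i, j))
        return pairs

    positions = np.random.rand(1000, 2) * 10.0
    print("interacting pairs:", len(neighbor_pairs(positions, 0.2)))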
@mastersthesis{diva2:816727,
author = {Kalms, Mikael},
title = {{High-performance particle simulation using CUDA}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4867--SE}},
year = {2015},
address = {Sweden},
}
Large data sets are difficult to visualize. For a human to find structures and understand the data, good visualization tools are required. In this project a technique is developed that makes it possible for a user to look at complex data at different scales. This technique is familiar from geographical data, where zooming in and out gives a good feeling for the spatial relationships in map data or satellite images. However, for other types of data it is not obvious how much scaling should be done.
In this project, an experimental application is developed that visualizes data in multiple dimensions from a large news article database. Using this experimental application, the user can select multiple keywords on different axes and then create a visualization containing news articles with those keywords.
The user is able to move around the visualization. If the camera is far away from the document icons, they are clustered using red coloured spheres. If the user moves the camera closer to the clusters, they open up into single document icons. If the camera is very close to the document icons, it is possible to read the news articles.
@mastersthesis{diva2:816487,
author = {Åklint, Richard and Khan, Muhammad Farhan},
title = {{Multidimensional Visualization of News Articles}},
school = {Linköping University},
type = {{LiTH-ISY-EX--15/4830--SE}},
year = {2015},
address = {Sweden},
}
Saliva contains an enzyme, called alpha-amylase, that can give an indication of a person's perceived stress level. The presence of alpha-amylase can be tested by dripping a saliva sample onto a special kind of filter paper, which is then stained bright blue. The rate of the staining has been shown to relate to the activity of alpha-amylase and has previously been measured with a photodiode. This work has investigated the technical feasibility of using a mobile-based solution instead, in order to make these measurements more accessible. A suitable measurement method was first sought by analysing a data set of image sequences taken with a mobile camera. The measurement method was then implemented in a mobile application for Android. Suitable measures of colour distance and a suitable measurement region have been investigated, as well as how external factors such as lighting conditions and camera movement can affect the measurements and how these effects can be counteracted. The results show that a mobile application can very well be used to measure colour development with consistent results, but that the precision needs to be investigated further with the connection to alpha-amylase in mind. Finally, the shortcomings of the implementation are discussed, and concrete suggestions for further development are given.
@mastersthesis{diva2:814607,
author = {Strid, Carl-Filip},
title = {{Mätning av färgutveckling med mobilapplikation som indikator på stressnivå}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--15/0431--SE}},
year = {2015},
address = {Sweden},
}
Visual surveillance systems are increasingly common in our society today. There is a conflict between the demand for public security and the demand to preserve personal integrity. This thesis suggests a solution in which parts of the surveillance images are covered in order to conceal the identities of persons appearing in the video, but not their actions or activities. The covered parts could be encrypted and unlocked only by the police or another legal authority in case of a crime.
This thesis implements a proof-of-concept demonstrator using a combination of image processing techniques such as foreground segmentation, mathematical morphology, geometric camera calibration and region tracking.
The demonstrator is capable of tracking a moderate number of moving objects and concealing their identities by replacing them with a mask or a blurred image. Functionality for replaying recorded data and unlocking individual persons is included.
The concept demonstrator shows the chain from concealing the identities of persons to unlocking only a single person in recorded data. Evaluation on a publicly available dataset shows overall good performance.
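The masking step in such a demonstrator can be approximated in a few lines with OpenCV: a background subtractor yields a foreground mask, and foreground pixels are replaced with a blurred version of the frame. This is a minimal sketch assuming the opencv-python package and a camera or video source; the thesis pipeline additionally uses morphology, camera calibration and region tracking.

    import cv2
    import numpy as np

    subtractor = cv2.createBackgroundSubtractorMOG2()   # foreground segmentation

    def conceal_foreground(frame):
        """Blur out moving (foreground) regions so identities are hidden."""
        mask = subtractor.apply(frame)                   # 0 = background, >0 = foreground
        mask = (mask > 0).astype(np.uint8)
        blurred = cv2.GaussianBlur(frame, (51, 51), 0)   # heavy blur hides identity
        mask3 = cv2.merge([mask, mask, mask])            # one mask channel per colour plane
        return np.where(mask3 > 0, blurred, frame)

    cap = cv2.VideoCapture(0)                            # or a path to a recorded video
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("concealed.png", conceal_foreground(frame))
    cap.release()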
@mastersthesis{diva2:814104,
author = {Fredrik, Hemström},
title = {{Privacy Protecting Surveillance: A Proof-of-Concept Demonstrator}},
school = {Linköping University},
type = {{LiTH-ISY-EX-07/3877-SE}},
year = {2015},
address = {Sweden},
}
Statistical random number testing is a well-studied field focusing on pseudo random number generators, that is to say algorithms that produce random-looking sequences of numbers. These generators tend to have certain kinds of flaws, which have been exposed through rigorous testing. Such testing has led to advancements, and today pseudo random number generators are both very fast and produce seemingly random numbers. Recent advancements in quantum physics have opened up new doors, and products called quantum random number generators, claimed to produce true randomness, have emerged.
Naturally, scientists want to test such randomness, and they turn to the old tests used for pseudo random number generators to do this. The main question this thesis seeks to answer is whether such publicly available tests are good enough to evaluate a quantum random number generator. We also compare sequences from such generators with those produced by state-of-the-art pseudo random number generators, in an attempt to compare their quality.
Another potential problem with quantum random number generators is the possibility of them breaking without the user knowing. Such a breakdown could have dire consequences. For example, if such a generator were to control the output of a slot machine, a malfunction could cause the machine to generate double earnings for a player compared to what was planned. We therefore look at the possibilities of implementing live tests for quantum random number generators, and propose such tests.
Our study has covered six commonly available tools for random number testing, and we show that one of these in particular stands out in that it has a series of tests that fail our quantum random number generator as not random enough, despite passing a pseudo random number generator. This implies that the quantum random number generator behaves differently from the pseudo random ones, and that we need to think carefully about how we test, what we expect from a random sequence and what we want to use it for.
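One of the simplest tests in the batteries discussed above is the monobit (frequency) test: the proportion of ones in the sequence is converted into a p-value via the complementary error function, and a very small p-value indicates bias. The sketch below implements this single test in Python as an illustration of statistical randomness testing; the suites evaluated in the thesis of course contain many more, and more powerful, tests.

    import math
    import secrets

    def monobit_p_value(bits):
        """NIST SP 800-22 frequency (monobit) test: p < 0.01 suggests non-randomness."""
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)        # map bits to +/-1 and sum
        s_obs = abs(s) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    random_bits = [secrets.randbits(1) for _ in range(10000)]
    biased_bits = [1] * 6000 + [0] * 4000            # a clearly biased sequence

    print("random sequence p-value:", monobit_p_value(random_bits))
    print("biased sequence p-value:", monobit_p_value(biased_bits))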
@mastersthesis{diva2:740158,
author = {Jakobsson, Krister Sune},
title = {{Theory, Methods and Tools for Statistical Testing of Pseudo and Quantum Random Number Generators}},
school = {Linköping University},
type = {{LiTH-ISY-EX--14/4790--SE}},
year = {2014},
address = {Sweden},
}
Video surveillance today can look very different depending on the objective and on the location where it is used. Some applications need a high image resolution and frame rate to carefully analyze what a camera sees, while other applications can use a lower resolution and a lower frame rate to achieve their goals. The communication between a camera and an observer depends much on the distance between them and on the contents. If the observer is far away, the information will reach the observer with delay, and if the medium carrying the information is unreliable, the observer has to keep this in mind. Lost information might not be acceptable for some applications, and some applications might not need their information instantly.
In this master thesis, IP network communication for an automatic tolling station has been simulated, where several video streams from different sources have to be synchronized. The quality of the images and the frame rate are both very important in this type of surveillance, where simultaneously exposed images are processed together.
The report includes short descriptions of some networking protocols, and descriptions of two implementations based on the protocols. The implementations were done in C++ using the basic socket API to evaluate the network communication. Two communication methods were used in the implementations, where the idea was to either push or poll images. To simulate the tolling station and create a network with several nodes, a number of Raspberry Pis were used to execute the implementations. The report also includes a discussion about how, and which, video/image compression algorithms the system might benefit from.
The results of the network communication evaluation show that the communication should be done using a pushing implementation rather than a polling implementation. A polling method is needed when the transportation medium is unreliable, but the network components were able to handle the amount of simultaneously sent information very well without control logic in the application.
@mastersthesis{diva2:730560,
author = {Forsgren, Gustav},
title = {{Multiple Synchronized Video Streams on IP Network}},
school = {Linköping University},
type = {{LiTH-ISY-EX--14/4776--SE}},
year = {2014},
address = {Sweden},
}
Critical vulnerabilities are commonly found in web applications. The arguably most problematic class of web application vulnerabilities is SQL injections. SQL injection vulnerabilities can be used to execute commands on the database coupled to the web application, e.g., to extract the web application's user and password data. Black box testing tools are often used (both by system owners and their adversaries) to discover vulnerabilities in a running web application. Hence, how well they perform at discovering SQL injection vulnerabilities is of importance. This thesis describes an experiment assessing detection capability for different SQL injection vulnerabilities under different conditions. In the experiment the following is varied: SQL injection vulnerability (17 instances allowing tautologies, piggy-backed queries, and logically incorrect queries), scanner (four products), exploitability (three levels), input vector (POST/GET), and time investment (three levels). The number of vulnerabilities detected is largely determined by the choice of scanner (30% to 77%) and the input vector (71% or 38%). The interaction between the scanner and input vector is substantial since two scanners cannot handle the POST vector at all. Substantial differences are also found in how well different SQL injection vulnerabilities are detected, and the more exploitable variants are detected more often, as expected. The impact of the time spent with the scan interacts with the scanner: some scanners required considerable time to configure and others did not. As a consequence, the relationship between time investment and detection capability is non-trivial.
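The tautology class of SQL injection mentioned above exploits string-concatenated queries: an input such as ' OR '1'='1 makes the WHERE clause always true. The sketch below, a hypothetical login check using Python's sqlite3 module, shows the vulnerable pattern next to the parameterized query that prevents it; it is illustrative only and is unrelated to the scanners evaluated in the thesis.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def login_vulnerable(name, password):
        # String concatenation: attacker-controlled input becomes part of the SQL text
        query = "SELECT * FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
        return db.execute(query).fetchone() is not None

    def login_safe(name, password):
        # Parameterized query: input is bound as data, never interpreted as SQL
        query = "SELECT * FROM users WHERE name = ? AND password = ?"
        return db.execute(query, (name, password)).fetchone() is not None

    tautology = "' OR '1'='1"
    print("vulnerable:", login_vulnerable("alice", tautology))   # True, without the password
    print("safe:      ", login_safe("alice", tautology))         # False

Black box scanners essentially probe for the vulnerable pattern by submitting payloads like the one above and observing the application's responses.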
@mastersthesis{diva2:717493,
author = {Norström, Alexander},
title = {{Measuring Accurancy of Vulnerability Scanners:
An Evaluation with SQL Injections}},
school = {Linköping University},
type = {{LiTH-ISY-EX--14/4748--SE}},
year = {2014},
address = {Sweden},
}
Intersection tests between meshes in physics engines are time consuming and computationally heavy tasks. In order to speed up these intersection tests, each mesh can be decomposed into several smaller convex hulls, where the intersection test between each pair of these smaller hulls becomes more computationally efficient.
The decomposition of meshes within the game industry is today performed by digital artists and is considered a boring and time consuming task. Hence, the focus of this master thesis lies in automatically decomposing a mesh into several smaller convex hulls and approximating these decomposed pieces with bounding volumes of different complexity. These bounding volumes together represent a collision mesh that is fully usable in modern games.
@mastersthesis{diva2:715850,
author = {Bäcklund, Henrik and Neijman, Niklas},
title = {{Automatic Mesh Decomposition for Real-time Collision Detection}},
school = {Linköping University},
type = {{LiTH-ISY-EX--14/4755--SE}},
year = {2014},
address = {Sweden},
}
This thesis details the results and conclusions of a project conducted at the game studio FromSoftware in Tokyo, Japan, during the autumn of 2008. The aim of the project was the design and implementation of a system able to generate 3D graphical representations of road networks.
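An L-system, as referenced in the title, generates structure by repeatedly rewriting every symbol of a string according to production rules; the resulting string is then interpreted, for example as turtle-graphics commands for road segments. The short Python sketch below shows the rewriting step with an illustrative rule set; the actual rules and interpretation used for road networks in the thesis are not reproduced here.

    def lsystem(axiom, rules, iterations):
        """Rewrite every symbol in parallel according to `rules` for a number of iterations."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)   # symbols without a rule are kept
        return s

    # hypothetical rule set: F = draw a segment, +/- = turn, [ ] = branch off a side road
    rules = {"F": "F[+F]F[-F]F"}
    result = lsystem("F", rules, 2)
    print(result)   # F[+F]F[-F]F[+F[+F]F[-F]F]F[+F]F[-F]F[-F[+F]F[-F]F]F[+F]F[-F]F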
@mastersthesis{diva2:1467574,
author = {Jormedal, Martin},
title = {{Procedural Generation of Road Networks Using L-Systems}},
school = {Linköping University},
type = {{LiTH-ISY-EX--13/4706--SE}},
year = {2013},
address = {Sweden},
}
This thesis presents the work of developing CPU code and GPU code for Thomas Kaijser's algorithm for calculating the Kantorovich distance, and the performance of the two is compared. Initially there is a rundown of the algorithm, which calculates the Kantorovich distance between two images. Thereafter we go through the CPU implementation, followed by the GPGPU implementation written in CUDA. Then the results are presented. Lastly, an analysis of the results and a discussion of possible improvements for future applications are presented.
@mastersthesis{diva2:683472,
author = {Engvall, Sebastian},
title = {{Kaijsers algoritm för beräkning av Kantorovichavstånd parallelliserad i CUDA}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--13/0414--SE}},
year = {2013},
address = {Sweden},
}
Navigation with UAVs requires that the position of the vehicle, and possibly of a target, can be measured during flight. In most cases, as high an accuracy as possible is desired for the position. DECCA is a retired system that was designed to read phase differences between multiples of a base frequency. The system had analogue meters that indicated the change in phase, and the position could be read off on special charts with interference lines corresponding to a given phase difference. GPS today gives a precision down to a couple of metres, but with optimisation techniques such as DGPS it is possible to reach centimetre accuracy. The prerequisite is that the UAV is in a geographical area with minimal propagation disturbance of the GPS signal. GPS is a system in use today that is still being developed. The focus has been on describing how GPS achieves high positioning precision, and on the difficulties that arise from the long distances between the receiver and the satellite. A short description of the DECCA system's weaknesses and strengths is given, together with a short summary of how DECCA and GPS would perform in a system with transmitting antennas placed over a smaller geographical area.
@mastersthesis{diva2:665658,
author = {Jakobsson, David and Jansson Stenroos, Erik},
title = {{Radiobaserad positionering för UAV'er}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--13/0410--SE}},
year = {2013},
address = {Sweden},
}
Handling broken 3D models can be a very time-consuming problem. Several methods aiming for automatic mesh repair have been presented in recent years. This thesis gives an extensive evaluation of automatic mesh repair algorithms, presents a mesh repair pipeline and describes an implemented automatic mesh repair algorithm. The presented pipeline for automatic mesh repair includes three main steps: octree generation, surface reconstruction and ray casting. Ray casting is used for removal of hidden objects. The pipeline also includes a pre-processing step for removal of intersecting triangles and a post-processing step for error detection. The implemented algorithm presented in this thesis is a volumetric method for mesh repair. It generates an octree in which data from the input model is saved. Before creation of the output, the octree data is patched to remove inconsistencies. The surface reconstruction is done with a method called Manifold Dual Contouring. First, new vertices are created from the information saved in the octree. Then there is a possibility to cluster vertices together to decimate the output. Thanks to a special manifold criterion, the output is guaranteed to be manifold. Furthermore, the output will have sharp and clear edges and corners thanks to the use of Singular Value Decomposition when determining the positions of the new vertices.
@mastersthesis{diva2:655691,
author = {Larsson, Agnes},
title = {{Automatic Mesh Repair}},
school = {Linköping University},
type = {{LiTH-ISY-EX--13/4720--SE}},
year = {2013},
address = {Sweden},
}
The usage of Unmanned Aerial Vehicles (UAVs) for various applications has increased during the past years. One of the possible applications is aerial image capturing for detection and surveillance purposes. In order to make the capturing process more efficient, multiple camera-equipped UAVs could fly in a formation and as a result cover a larger area. To be able to receive several image sequences and stitch them together, resulting in a panorama video, a software application has been developed and tested for this purpose. All functionality is developed in C++ using the software library OpenCV. All implementations of the different techniques and methods have been made as generic as possible so that functionality can be added in the future. Common methods in computer vision and object recognition such as SIFT, SURF and RANSAC have been tested.
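The core of such stitching is matching features between overlapping frames and robustly estimating a homography with RANSAC. The OpenCV sketch below demonstrates this on a synthetic image pair using ORB features as a stand-in for the SIFT/SURF detectors mentioned above; it is a generic illustration, not the thesis application.

    import cv2
    import numpy as np

    # synthetic test pair: a noisy image and a translated copy of it
    img1 = (np.random.rand(240, 320) * 255).astype(np.uint8)
    M = np.float32([[1, 0, 40], [0, 1, 10]])              # known shift: 40 px right, 10 px down
    img2 = cv2.warpAffine(img1, M, (320, 240))

    # detect and match features (ORB here; the thesis evaluated SIFT, SURF and RANSAC)
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    # estimate the homography robustly with RANSAC, as used when compositing the panorama
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    print("estimated translation:", H[0, 2], H[1, 2])     # should be close to 40 and 10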
@mastersthesis{diva2:648227,
author = {Hagelin, Rickard and Andersson, Thomas},
title = {{Sammanfogning av videosekvenser från flygburna kameror}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--13/0407--SE}},
year = {2013},
address = {Sweden},
}
Wavelength conversion and traffic grooming have been among the most researched areas and technologies of importance in optical networking. Network performance improves significantly by relaxing the wavelength continuity constraint using wavelength converters and by improving the wavelength utilization using traffic grooming. We have done a literature review that compares the performance of wavelength conversion devices with different traffic grooming devices. This thesis work analyzes the impact of increasing the number of wavelength conversion devices and grooming-capable devices using different placement schemes for our proposed network model, traffic loads and link capacities. Deciding the number and location of these devices to be used in a network is equally important. This work has been done through the simulation of different device placement scenarios, and the results have been analyzed using connection blocking probability as the performance metric. Our review and results correctly predict the behavior demonstrated in other literature relating to wavelength conversion and traffic grooming.
@mastersthesis{diva2:604407,
author = {Ali, Wajid and Mohammed, Shahzaan},
title = {{Analyzing Wavelength Conversion and Traffic Grooming in Optical WDM Networks}},
school = {Linköping University},
type = {{LiTH-ISY-EX--13/4651--SE}},
year = {2013},
address = {Sweden},
}
A new type of computing architecture called ePUMA is under development by the ePUMA Research Team at the Department of Electrical Engineering at Linköping University. It contains several single instruction multiple data (SIMD) cores, called SIMD Units, in which up to 64 computations can be done in parallel. The goal of the architecture is to create a low-power chip with good performance for embedded applications. One possible application is video games. In this work we have studied a selected set of video game related algorithms, including a Pseudo-Random Number Generator, Clipping, and Rasterization & Fragment Processing, analyzing how well they fit the ePUMA platform.
@mastersthesis{diva2:575650,
author = {Tolunay, John},
title = {{Parallel gaming related algorithms for an embedded media processor}},
school = {Linköping University},
type = {{LiTH-ISY-EX--12/4641--SE}},
year = {2012},
address = {Sweden},
}
Auditory brainstem response (ABR) evaluation has been one of the most reliable methods for evaluating hearing loss. Clinically available methods for ABR tests require averaging over a large number of sweeps (~1000-2000) in order to obtain a meaningful ABR signal, which is time consuming. This study proposes a faster method for ABR filtering, based on a wavelet-Kalman filter, that is able to produce a meaningful ABR signal with fewer than 500 sweeps. The method is validated against ABR data acquired from 7 normal-hearing subjects with different stimulus intensity levels, the lowest being 30 dB NHL. The proposed method was able to filter and produce a readable ABR signal using 400 sweeps; other ABR signal criteria were also presented to validate the performance of the proposed method.
@mastersthesis{diva2:564868,
author = {Alwan, Abdulrahman},
title = {{Implementation of Wavelet-Kalman Filtering Technique for Auditory Brainstem Response}},
school = {Linköping University},
type = {{LiTH-ISY-EX--12/4633--SE}},
year = {2012},
address = {Sweden},
}
A digital signature is the electronic counterpart to the hand written signature. It can prove the source and integrity of any digital data, and is a tool that is becoming increasingly important as more and more information is handled electronically.
Digital signature schemes use a pair of keys. One key is secret and allows the owner to sign some data, and the other is public and allows anyone to verify the signature. Assuming that the keys are large enough, and that a secure scheme is used, it is impossible to find the private key given only the public key. Since a signature is valid for the signed message only, this also means that it is impossible to forge a digital signature.
The most widely used scheme for constructing digital signatures today is RSA, which is based on the hard mathematical problem of integer factorization. There are, however, other mathematical problems that are considered even harder, which in practice means that the keys can be made shorter, resulting in a smaller memory footprint and faster computations. One such alternative approach is using elliptic curves.
The underlying mathematical problem of elliptic curve cryptography is different from that of RSA; however, some structure is shared. The purpose of this thesis was to evaluate the performance of elliptic curves compared to RSA, on a system designed to efficiently perform the operations associated with RSA.
The results show that the elliptic curve approach offers some great advantages, even when using RSA hardware, and that these advantages increase significantly if special hardware is used. Some usage cases of digital signatures may, for a few more years, still be in favor of the RSA approach when it comes to speed. For most cases, however, an elliptic curve system is the clear winner, and will likely be dominant in the near future.
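The group operation behind elliptic curve signatures is point addition on a curve y^2 = x^3 + ax + b over a prime field; scalar multiplication of a base point by the private key yields the public key. The toy Python sketch below uses a tiny curve with made-up parameters so the arithmetic is visible; real schemes such as ECDSA use standardized curves with primes of roughly 256 bits.

    # toy short-Weierstrass curve y^2 = x^3 + ax + b over GF(p); parameters are illustrative only
    p, a, b = 97, 2, 3
    G = (3, 6)            # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

    def point_add(P, Q):
        """Add two points on the curve (None represents the point at infinity)."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                   # P + (-P) = infinity
        if P == Q:
            s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)    # tangent slope (doubling)
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, p)           # chord slope
        x3 = (s * s - x1 - x2) % p
        return (x3, (s * (x1 - x3) - y1) % p)

    def scalar_mult(k, P):
        """Double-and-add: compute k*P, the core operation of EC key generation and signing."""
        R = None
        while k:
            if k & 1:
                R = point_add(R, P)
            P = point_add(P, P)
            k >>= 1
        return R

    private_key = 41
    public_key = scalar_mult(private_key, G)
    print("public key point:", public_key)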
@mastersthesis{diva2:550312,
author = {Krisell, Martin},
title = {{Elliptic Curve Digital Signatures in RSA Hardware}},
school = {Linköping University},
type = {{LiTH-ISY-EX--12/4618--SE}},
year = {2012},
address = {Sweden},
}
An animation system gives a dynamic and life-like feel to character motions, allowing motion behaviour that far transcends the mere spatial translations of classic computer games. This increase in behavioural complexity does not come for free, however, as animation systems are often haunted by considerable performance overhead, the extent of which reflects the complexity of the desired system.
In game development, performance optimization is key, and the pursuit of it is aided by the static hardware configuration of modern gaming consoles. These allow extensive optimization by specializing the application, in whole or in part, to the underlying hardware architecture.
In this master's thesis, a method that efficiently utilizes the parallel architecture of the PlayStation®3 is proposed in order to migrate the process of animation evaluation and blending from a single-threaded implementation on the main processor to a fully parallelized multi-threaded solution on the associated coprocessors. This method is further complemented with an in-depth study of the underlying theoretical foundations, as well as a reflection on similar works and approaches used by other contemporary game development companies.
@mastersthesis{diva2:541451,
author = {Jakobsson, Teodor},
title = {{Parallelization of Animation Blending on the PlayStation®3}},
school = {Linköping University},
type = {{LiTH-ISY-EX--12/4561--SE}},
year = {2012},
address = {Sweden},
}
This report describes a master thesis performed at Degoo Backup AB in Stockholm, Sweden, in the spring of 2012. The purpose was to design a compression suite in Java that aims to improve the compression ratio for file types assumed to be commonly used in backup software. A tradeoff between compression ratio and compression speed has been made in order to meet the requirement that the compression suite has to be able to compress the data fast enough. A study of the best performing existing compression algorithms has been made in order to choose the most suitable compression algorithm for every possible scenario, and file-type-specific compression algorithms have been developed in order to further improve the compression ratio for files considered to need improved compression. The resulting compression performance is presented for file types assumed to be common in backup software, and the overall performance is good. The final conclusion is that the compression suite fulfills all requirements set for this thesis.
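The ratio-versus-speed tradeoff discussed above is easy to observe with standard library codecs: stronger algorithms usually compress smaller but take longer. The Python sketch below compares zlib, bz2 and lzma on a sample buffer; it is a generic illustration, since the thesis suite is written in Java and also uses file-type-specific algorithms.

    import bz2
    import lzma
    import time
    import zlib

    # sample data: repetitive text compresses well and makes the ratio differences visible
    data = b"backup software stores many similar files " * 20000

    for name, compress in (("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)):
        start = time.perf_counter()
        packed = compress(data)
        elapsed = time.perf_counter() - start
        ratio = len(data) / len(packed)
        print(f"{name:5s} ratio {ratio:7.1f}x  time {elapsed * 1000:6.1f} ms")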
@mastersthesis{diva2:537095,
author = {Zeidlitz, Mattias},
title = {{Improving compression ratio in backup}},
school = {Linköping University},
type = {{LITH-ISY-EX--12/4588--SE}},
year = {2012},
address = {Sweden},
}
Web applications are becoming increasingly sophisticated and functionality that was once exclusive to regular desktop applications can now be found in web applications as well. One of the more recent advances in this field is the ability for web applications to render 3D graphics. Coupled with the growing number of devices with graphics processors and the ability of web applications to run on many different platforms using a single code base, this represents an exciting new possibility for developers of 3D graphics applications.
This thesis aims to explore and evaluate the technologies for 3D graphics that can be used in web applications, with the final goal of using one of them in a prototype application. This prototype will serve as a foundation for an application to be included in a commercial product. The evaluation is performed using general criteria so as to be useful for other applications as well, with one part presenting the available technologies and another part evaluating the three most promising technologies more in-depth using test programs.
The results show that, although some technologies are not production-ready, there are a few which can be used in commercial software, including the three chosen for further evaluation: WebGL, the Java library JOGL and Stage 3D for Flash. Among these, there is no clear winner, and it is up to the application requirements to decide which to use. The thesis demonstrates an application built with WebGL and shows that fairly demanding 3D graphics web applications can be built. Also included are the lessons learned during the development and thoughts on the future of 3D graphics in web applications.
@mastersthesis{diva2:536657,
author = {Waern\'{e}r, Klara},
title = {{3D Graphics Technologies for Web Applications:
An Evaluation from the Perspective of a Real World Application}},
school = {Linköping University},
type = {{LiTH-ISY-EX--12/4562--SE}},
year = {2012},
address = {Sweden},
}
MPLS is a widely used technology in service provider and enterprise networks across the globe. An MPLS-enabled infrastructure has the ability to transport any type of payload (ATM, Frame Relay and Ethernet) over it, subsequently providing a multipurpose architecture. An incoming packet is classified only once as it enters the MPLS domain and gets assigned label information; thereafter all decision processes along a specified path are based upon the attached label rather than destination IP addresses. As network applications are becoming mission critical, the requirements for fault-tolerant networks are increasing as a basic requirement for carrying sensitive traffic. Fault tolerance mechanisms as provided by an IP/MPLS network help in providing end-to-end Quality of Service within a domain by better handling blackouts and brownouts. This thesis work reflects how MPLS increases the capability of deployed IP infrastructure to transport traffic between end devices when unexpected failures occur. It also focuses on how MPLS converts a packet-switched network into a circuit-switched network, while retaining the characteristics of packet-switched technology. A new mechanism for MPLS fault tolerance is proposed.
@mastersthesis{diva2:514343,
author = {Kebria, Muhammad Roohan},
title = {{Analyzing IP/MPLS as Fault Tolerant Network Architecture}},
school = {Linköping University},
type = {{LiTH-ISY-EX--12/4551--SE}},
year = {2012},
address = {Sweden},
}
In today's growing game industry it is common to abstract parts of the code base out into a so-called scripting language. Through this scripting layer, gameplay-level ("in-game") behaviour is typically programmed by dedicated scripters rather than by programmers, who can instead concentrate on developing the game engine itself. There is a wealth of different scripting languages, each with its own advantages and drawbacks. One of the best-known languages in the game industry is Lua.
Paradox Interactive has developed its own scripting language, which it currently uses. The company wanted to investigate whether it would be possible to use Lua instead today. In this thesis a prototype is developed which shows that it is possible to translate the scripts in Paradox Interactive's game Europa Universalis 3 into scripts that are executed with the scripting language Lua instead.
The report goes through how the current language works, what Lua's basic building blocks are and, finally, how the translation was carried out. The report concludes with a comparison of the two scripting systems in which execution times are measured, together with a discussion of the results and possible improvements.
@mastersthesis{diva2:509755,
author = {Rönn, Jimmy},
title = {{Översättning av självutvecklat skriptspråk till Lua i spelmotor.}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--12/0387--SE}},
year = {2012},
address = {Sweden},
}
In today's business, information technology (IT) and information play a key role. Because of the development and influence of information technology, the use of systems, IT services and networks cannot be avoided in business, and they all need to be protected and secured. To ensure this level of security and protection, information security systems (ISS) have been used. Even so, businesses today face greater risks and consequences, which are becoming more specific and changing constantly. Under such circumstances, the solution method should be tailored to each and every part of the business in order to provide competitive and stronger security. Such compact solutions are provided by the Business Continuity Planning (BCP) method; a Business Continuity Plan is a central idea originating from the field of information security. This research involves a case study of the railway sector, producing a Business Continuity Plan covering its network security, system security and physical security. The presentation follows a systematic structure so that the reader can understand the results more easily. Chapter 1 and Chapter 2 give the introduction and the background studies needed to draw up a BCP for network, system and physical security. Chapter 3, the results section, gives the recommendations that need to be followed when drawing up network, system and physical security plans for a railway network.
@mastersthesis{diva2:483470,
author = {Govindarajan, Arulmozhivarman},
title = {{Business Continuity Planning in the IT Age - A railway sector case study}},
school = {Linköping University},
type = {{LITH-ISY-EX--11/4539--SE}},
year = {2012},
address = {Sweden},
}
The aim of this thesis is to create a computer program that simulates the motion of cells in a developing embryo. The resulting simulator is to be used in the Cell Lineage project (Robert Forchheimer et al.) as an input to their genetic model, the meta-Boolean model [18]. This genetic model is not the focus of this work. Since the simulated system is highly complex, with fluids and deforming soft bodies, it is unfeasible to simulate the system in a physically realistic manner while keeping execution time at reasonable values. Therefore some physical realism is sacrificed in favor of simulation stability and execution speed. The resulting simulator, Cell-Lab, uses Position Based Dynamics (PBD) [17] to implement a number of different models for the cells' mechanical properties. PBD is well suited for this purpose since it, while not taking excessively long to execute, guarantees an unconditionally stable simulation. The simulator includes a hard eggshell surrounding the cells. Cells can be split during the simulation, emulating mitosis. There is also the possibility to simulate cell adhesion using a cadherin-like mechanism. To control when and how cells are split, and to fetch information about the current state of the simulation, there is an interface to be used by external applications. The meta-Boolean model can be implemented in such an application.
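To make the PBD idea concrete, the following minimal Python sketch (not the Cell-Lab code itself; the particle weights and stiffness are illustrative assumptions) shows the projection of a single distance constraint of the kind used in [17] to model soft bodies:

```python
# Minimal Position Based Dynamics step for a single distance constraint,
# in the spirit of [17]; an illustrative sketch, not Cell-Lab itself.
import numpy as np

def project_distance(p1, p2, rest_len, w1=1.0, w2=1.0, stiffness=1.0):
    """Move two particles so their distance approaches rest_len (PBD projection)."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return p1, p2
    n = d / dist
    c = dist - rest_len                      # constraint violation
    corr = stiffness * c * n / (w1 + w2)     # weighted by inverse masses
    return p1 + w1 * corr, p2 - w2 * corr

p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
for _ in range(10):                          # Gauss-Seidel-style iterations
    p1, p2 = project_distance(p1, p2, rest_len=1.0)
print(np.linalg.norm(p2 - p1))               # approaches 1.0
```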
@mastersthesis{diva2:743653,
author = {Jonsson, Emil},
title = {{Modeling and Simulation of Cells}},
school = {Linköping University},
type = {{LiTH-ISY-EX--11/4365--SE}},
year = {2011},
address = {Sweden},
}
Information security has always been a topic of concern in the world, as an emphasis on new techniques to secure the identity of a legitimate user is regarded as a top priority. To counter this issue, we have the traditional authentication factors "what you have" and "what you know", in the form of smart cards or passwords respectively. Biometrics, by contrast, is based on the factor "who you are", analyzing human physical or behavioral characteristics. Biometrics has always been an efficient means of authorization and is now considered a $1500 million industry, where fingerprints dominate the market while iris recognition is quickly emerging as the most desirable biometric technique. The main goal of this thesis is to compare and evaluate different biometric techniques in terms of their purpose, recognition mechanism, market value and application areas. Since there are no defined evaluation criteria, my method of evaluation is based on a literature survey of the internet, books, IEEE papers and technical surveys. Chapter 3 covers different biometric techniques briefly, while in Chapter 4 I go deeper into iris, fingerprint and facial techniques, which are prominent in the biometrics world. Lastly, I give a general assessment of biometrics and their future growth, and suggest specific techniques for different environments such as access control, e-commerce, national IDs and surveillance.
@mastersthesis{diva2:469340,
author = {Zahidi, Salman},
title = {{Biometrics - Evaluation of Current Situation}},
school = {Linköping University},
type = {{LiTH-ISY-EX--11/4535--SE}},
year = {2011},
address = {Sweden},
}
Choosing a 3D file format is a difficult task, as there exist countless formats with different ways of storing the data. A format may be binary or clear text using XML, may support a lot of features or just the ones currently required, and may have an official, or just an unofficial, specification available. This thesis compares four different 3D file formats by how they handle specific features: meshes, animation and materials.
The file formats were chosen based on whether they could be exported by the 3D computer graphics software Blender, whether they supported the required features and whether they appeared to have some form of complete specification. The formats were then evaluated by studying the available specification and, if time permitted, creating a parser. The chosen formats were COLLADA, B3D, MD2 and X.
The comparison was then conducted, comparing how they handled meshes, animation, materials, specification and file size. This was followed by a more general discussion about the formats.
@mastersthesis{diva2:462098,
author = {Lundgren, Marcus},
title = {{A comparison of 3D file formats}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--11/0384--SE}},
year = {2011},
address = {Sweden},
}
Image completion is the process of removing an area from a photograph and replacing it with suitable data. Earlier methods either search for this data within the image itself, or extend the search to additional data, usually some form of database.
Methods that search for suitable data within the image itself have problems when no suitable data can be found in the image. Methods that extend the search have, in earlier work, used either a database of labeled images or a massive database of photos from the Internet. For the labels in a database to be useful they typically need to be entered manually, which is a very time-consuming process. Methods that use databases with millions of images from the Internet have issues with copyrighted images, storage of the photographs and computation time.
This work shows that a small database of the user's own private, or professional, photos can be used to improve the quality of image completions. A photographer today typically takes many similar photographs of similar scenes during a photo session. Therefore a smaller number of images is needed to find images that are visually and structurally similar than when random images downloaded from the Internet are used.
Thus, this approach gains most of the advantages of using additional data for the image completions, while at the same time minimizing the disadvantages. It gains a better ability to find suitable data without having to process millions of irrelevant photos.
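The key step of picking a suitable source photo from the small personal database can be illustrated with a minimal Python sketch (an assumption for illustration only, not the matching method actually used in the thesis), which ranks images by a coarse downsampled-grayscale descriptor:

```python
# Illustrative sketch of the "use your own photos" idea: pick the local image that is
# most similar to the photo being completed, using a very coarse descriptor.
# This is an assumption for illustration, not the thesis' actual matcher.
import numpy as np

def descriptor(img: np.ndarray, size: int = 16) -> np.ndarray:
    """Very coarse descriptor: average-pool the grayscale image to size x size and flatten."""
    h, w = img.shape
    ys = np.linspace(0, h, size + 1, dtype=int)
    xs = np.linspace(0, w, size + 1, dtype=int)
    pooled = [[img[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean() for j in range(size)]
              for i in range(size)]
    return np.asarray(pooled).ravel()

def most_similar(query: np.ndarray, database: list) -> int:
    q = descriptor(query)
    dists = [np.linalg.norm(q - descriptor(d)) for d in database]
    return int(np.argmin(dists))             # index of the best source image for filling

rng = np.random.default_rng(0)
db = [rng.random((240, 320)) for _ in range(5)]
print(most_similar(db[2] + 0.01 * rng.random((240, 320)), db))   # -> 2
```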
@mastersthesis{diva2:442585,
author = {Dalkvist, Mikael},
title = {{Image Completion Using Local Images}},
school = {Linköping University},
type = {{LiTH-ISY-EX--11/4506--SE}},
year = {2011},
address = {Sweden},
}
Smartphones are one of the most popular technology gadgets on the market today. The number of devices in the world is growing incredibly fast, and they have taken an important place in many people's everyday lives. They are small, powerful, always connected to the Internet and usually contain a lot of personal information such as contact lists, pictures and stored passwords. They are sometimes even used as login tokens for Internet banking services and web sites. Smartphones are, undoubtedly, incredible devices! But are smartphones secure, and is stored information safe? Can and should these devices be trusted to keep sensitive information and personal secrets? Every single day newspapers and researchers warn about new smartphone malware and other security breaches regarding smartphones. So, are smartphones safe to use, or are they a spy's best friend for keeping a person under surveillance? Can a user do anything to make the device more secure and safe enough to use in a secure manner? These questions are exactly what this paper is about!
This paper addresses two popular smartphone platforms, iOS and BlackBerry OS, in order to evaluate how secure these systems are, what risks occur when using them and how to harden the platform security to make these platforms as secure and safe to use as possible. Another aim of this paper is to discuss and give suggestions on how a separate, dedicated hardware token can be used to improve the platform security even further. In order to evaluate the security level of these platforms, a risk and threat analysis has been made, as well as some practical testing of what can actually be done. The test part consists mostly of a proof-of-concept spyware application implemented for iOS and an IMSI-catcher used to eavesdrop on calls through a rogue GSM base transceiver station.
The implemented spyware was able to access and transfer sensitive data from the device to a server without notifying the user. The rogue base station attack was even scarier: with only a few days' work and equipment costing less than $1500, smartphones can be tricked into connecting to a rogue base station, and all outgoing calls can be intercepted and recorded. The risk analysis resulted in no fewer than 19 identified risks with impacts of mixed severity. Some configuration and usage recommendations are given in order to prevent or mitigate these risks and make the usage of these platforms safer. The aim of suggesting how a hardware token could be used to strengthen these platforms has largely been a failure, since no fully working suggestion could be given. This is because these systems are tightly closed to modification by third parties, and such modifications are needed in order to implement a working hardware token. However, a few partial suggestions for how such a token could work are given.
The result of this work indicates that neither iOS nor BlackBerry OS is entirely secure, and both need to be configured and used in a correct way to be safe for the user. The benefits of a hardware token would be significant for these systems, but the implementations that are currently possible are not enough, and it might not be of interest to implement a hardware token for these systems at the moment. Some of the identified risks require the attacker to have physical access to the device, and this can only be prevented if the user is careful and acts wisely. So, if you want to use high-technology gadgets such as smartphones, be sure to be a smart user!
@mastersthesis{diva2:439481,
author = {Hansson, Fredrik},
title = {{System Integrity for Smartphones:
A security evaluation of iOS and BlackBerry OS}},
school = {Linköping University},
type = {{LiTH-ISY-EX--11/4494--SE}},
year = {2011},
address = {Sweden},
}
This project aims to make 3D modeling easy through the use of augmented reality. Black and white markers are used to augment the virtual objects. Detection of these is done with help from ARToolKit, developed at University of Washington.
The model is represented by voxels, and visualised through the marching cubes algorithm. Two physical tools are available to edit the model; one for adding and one for removing volume. Thus the application is similar to sculpting or drawing in 3D.
The resulting application is both easy to use and cheap, in that it does not require expensive equipment.
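The editing model lends itself to a very small sketch. The following Python example (grid size and brush behaviour are illustrative assumptions, and rendering via marching cubes is omitted) shows how the two tools can be expressed as a spherical brush that adds or removes volume in a voxel grid:

```python
# Sketch of the sculpting idea: the model is a voxel grid and the two tools add or
# remove volume inside a spherical brush. Grid size and brush behaviour are
# illustrative assumptions; the mesh would be extracted with marching cubes.
import numpy as np

GRID = np.zeros((64, 64, 64), dtype=bool)     # True = filled voxel

def apply_brush(grid, center, radius, add=True):
    zs, ys, xs = np.ogrid[:grid.shape[0], :grid.shape[1], :grid.shape[2]]
    cz, cy, cx = center
    inside = (zs - cz) ** 2 + (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    grid[inside] = add                        # add or carve away volume
    return grid

apply_brush(GRID, center=(32, 32, 32), radius=10, add=True)    # sculpt a sphere
apply_brush(GRID, center=(32, 32, 40), radius=5, add=False)    # carve a dent
print(GRID.sum(), "filled voxels")
```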
@mastersthesis{diva2:411715,
author = {Schlaug, Frida},
title = {{3D Modeling in Augmented Reality}},
school = {Linköping University},
type = {{LITH-ISY-EX-ET--10/0379--SE}},
year = {2011},
address = {Sweden},
}
The thesis investigates the temperature distribution in the chip of an infrared camera caused by its read out integrated circuit. The heat from the read out circuits can cause distortions to the thermal image. Knowing the temperature gradient caused by internal heating, it will later be possible to correct the image by implementing algorithms subtracting temperature contribution from the read out integrated circuit.
The simulated temperature distribution shows a temperature gradient along the edges of the matrix of active bolometers. There are also three hot spots at both the left and right edge of the matrix, caused by heat from the chip temperature sensors and I/O pads. Heat from the chip temperature sensors also causes an uneven temperature profile in the column of reference pixels, possibly causing imperfections in the image at the levels of the sensors.
Simulations of bolometer row biasing are carried out to get information about how biasing affects temperatures in neighbouring rows. The simulations show some row-to-row interference, but the thermal model suffers from having biasing heat inserted directly onto the top surface of the chip, as opposed to having heat originate from the bolometers. To get better simulation results describing the row biasing, a thermal model of the bolometers needs to be included.
The results indicate a very small temperature increase in the active pixel array, with temperatures not exceeding ten millikelvin. Through comparisons with another similar simulation of the chip, there is reason to believe the simulated temperature increase is a bit low. The other simulation cannot be used to draw any conclusions about the distribution of temperature.
@mastersthesis{diva2:400652,
author = {Salomonsson, Stefan},
title = {{Simulation of Temperature Distribution in IR Camera Chip}},
school = {Linköping University},
type = {{LiTH-ISY-EX--11/4421--SE}},
year = {2011},
address = {Sweden},
}
Driving simulators are today a very important resource for carrying out studies focused on driver behaviour. Full control over scenario and environment, as well as cost and safety, are aspects that make simulator studies advantageous compared with studies in real traffic.
One problem with driving simulators is that the image is projected on a two-dimensional screen, which limits the driver's ability to estimate distance and speed. It is well known that distance and speed are underestimated in driving simulators.
The goal of the thesis was to find methods that can improve distance estimation in driving simulators, and during the project motion parallax and shadows were implemented and tested, with the main focus on the former.
At the end of the project a simulator experiment was carried out to evaluate the effect of motion parallax. Ten test subjects each made two drives in VTI's Simulator III facility, one with motion parallax enabled and one with it disabled. The scenario played out during the drives contained several overtaking situations as well as a speed perception test.
The results from the simulator experiment showed that the test subjects tended to position themselves further from the centre line when motion parallax was enabled in situations where the view was blocked by a vehicle ahead.
@mastersthesis{diva2:393398,
author = {Andersson Hultgren, Jonas},
title = {{Metoder för förbättrad rumsuppfattning i körsimulatorer}},
school = {Linköping University},
type = {{LITH-ISY-EX--11/4442--SE}},
year = {2011},
address = {Sweden},
}
This thesis work regards free viewpoint TV. The main idea is that users can switch between multiple streams in order to find views of their own choice. The purpose is to provide fast switching between the streams, so that users experience less delay while view switching. In this thesis work we will discuss different video stream switching methods in detail. Then we will discuss issues related to those stream switching methods, including transmission and switching. We shall also discuss different scenarios for fast stream switching in order to make services more interactive by minimizing delays.
Stream switching time varies between live and recorded events. Quality of service (QoS) is another factor to consider, which can be improved by assigning priorities to the packets. We will discuss simultaneous stream transmission methods which are based on predictions and reduced-quality streams for providing fast switching. We will present a prediction algorithm for viewpoint prediction, propose a system model for fast viewpoint switching and evaluate simultaneous stream transmission methods for free viewpoint TV. Finally, we draw our conclusions and propose future work.
@mastersthesis{diva2:380465,
author = {Hussain, Mudassar},
title = {{Free Viewpoint TV}},
school = {Linköping University},
type = {{LiTH-ISY-EX--10/4437--SE}},
year = {2010},
address = {Sweden},
}
Sectra Communications is today developing cryptographic products for high assurance environments with rigorous requirements on separation between encrypted and un-encrypted data. This separation has traditionally been achieved through the use of physically distinct hardware components, leading to larger products which require more power and cost more to produce compared to systems where lower assurance is required.
An alternative to hardware separation has emerged thanks to a new class of operating systems based on the "separation kernel" concept, which offers verifiable separation between software components running on the same processor, comparable to that of physical separation. The purpose of this thesis was to investigate the feasibility of developing a product based on a separation kernel, and which possibilities and problems would arise with respect to security evaluation.
In the thesis, a literature study was performed covering publications on the separation kernel from a historical and technical perspective, and the development and current status on the subject of software evaluation. Additionally, a software crypto demonstrator was partly implemented in the separation kernel based Green Hills Integrity operating system.
The thesis shows that the separation kernel concept has matured significantly and it is indeed feasible to begin using this class of operating systems within a near future. Aside from the obvious advantages with smaller amounts of hardware, it would give greater flexibility in development and potential for more fine-grained division of functions. On the other hand, it puts new demands on developers and there is also a need for additional research about some evaluation aspects, failure resistance and performance.
@mastersthesis{diva2:375768,
author = {Frid, Jonas},
title = {{Security Critical Systems in Software}},
school = {Linköping University},
type = {{LiTH-ISY-EX--10/4377--SE}},
year = {2010},
address = {Sweden},
}
Even though speaker verification is a broad subject, commercial and personal-use implementations are rare. There are several problems that need to be solved before speaker verification can become more useful. The number of pattern matching and feature extraction techniques is large, and the decision on which ones to use is debatable. One of the main problems of speaker verification in general is the impact of noise. The very popular feature extraction technique MFCC is inherently sensitive to mismatch between training and verification conditions. MFCC is used in many speech recognition applications and is not only useful in text-dependent speaker verification. However, the most reliable verification techniques are text-dependent. One of the most popular pattern matching techniques in text-dependent speaker verification is DTW. Although it has limitations outside text-dependent applications, it is a reliable way of matching templates even with a limited amount of training material. The signal processing techniques, MFCC and DTW, are explained and discussed in detail along with a Matlab program where these techniques have been implemented. The choices made in signal processing, feature extraction and pattern matching are motivated by discussions of available studies on these topics. The results indicate that it is possible to program text-dependent speaker verification systems that are functional in clean conditions with tools like Matlab.
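As a minimal illustration of the template matching step, the sketch below computes a DTW distance between two sequences of feature frames in Python (the thesis implementation is in Matlab; the frame counts and the random data are purely illustrative):

```python
# Minimal dynamic time warping between two feature sequences (e.g., MFCC frames).
# The thesis implementation was in Matlab; this is an illustrative Python sketch.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (n, d) and b: (m, d) feature matrices; returns accumulated warping cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])      # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(1)
enroll = rng.random((40, 13))                 # 40 frames of 13 MFCCs (enrolment template)
claim = enroll[::2]                           # a time-warped "verification" utterance
print(dtw_distance(enroll, claim))            # low cost suggests the same phrase/speaker
```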
@mastersthesis{diva2:360241,
author = {Tolunay, Atahan},
title = {{Text-Dependent Speaker Verification Implemented in Matlab Using MFCC and DTW}},
school = {Linköping University},
type = {{LiTH-ISY-EX--10/4452--SE}},
year = {2010},
address = {Sweden},
}
An IP based set-top box (STB) is essentially a lightweight computer used to receive video over the Internet and convert it to analog or digital signals understood by the television. During this transformation from a digital image to an analog video signal many different types of distortions can occur. Some of these distortions will affect the image quality in a negative way. If these distortions could be measured they might be corrected and give the system a better image quality.
This thesis is a continuation of two previous theses in which custom hardware for sampling analog component video signals was created. In this thesis, software has been created to communicate with the sampling hardware and perform several different measurements on the collected samples.
The analog video signal quality measurement system has been compared to a similar commercial product, and it was found that all except two measurement methods gave very good results. The remaining two measurement methods gave acceptable results; however, the differences might be due to differences in implementation. The most important thing for the measurement system is consistency: if a system is consistent, then any changes leading to worse video quality can be detected.
@mastersthesis{diva2:357787,
author = {Ljungström, Carl},
title = {{Design and Implementation of an Analog Video Signal Quality Measuring Software for Component Video}},
school = {Linköping University},
type = {{LITH-ISY-EX--10/4206--SE}},
year = {2010},
address = {Sweden},
}
This bachelor thesis is a literature study of the possibility to analyze and modify speech signals, and will act as a pilot study for future theses in speaker verification. The thesis deals with voice anatomy and physiology, synthesizer history and the various methods available when the voice is used as a biometric. A search and evaluation of existing programs has been conducted to determine the relevance of attacks on the parameters used for speaker verification.
@mastersthesis{diva2:355704,
author = {Eriksson, Madeleine},
title = {{Litteraturstudie om möjligheterna att analysera och modifiera talsignaler}},
school = {Linköping University},
type = {{LiTH-ISY-EX-ET--10/0367--SE}},
year = {2010},
address = {Sweden},
}
A method for constructing a highly scalable bit stream for video coding is presented in detail and implemented in a demo application with a GUI in the Windows Vista operating system.
The video codec uses the Discrete Wavelet Transform in both the spatial and temporal directions, together with a zerotree quantizer, to achieve a bit stream that is highly scalable in terms of quality, spatial resolution and frame rate.
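As an illustration of the basic building block involved, the following Python sketch performs a one-level 2D Haar split of a frame into four subbands (illustrative only: the thesis codec is not necessarily Haar-based, and it also transforms temporally and applies a zerotree quantizer, which are omitted here):

```python
# One-level 2D Haar wavelet split of a frame into LL/LH/HL/HH subbands, the kind of
# decomposition that enables spatial and quality scalability. Illustrative sketch only.
import numpy as np

def haar2d(frame: np.ndarray):
    a = frame.astype(float)
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2      # horizontal average / difference
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    LL = (lo_r[0::2] + lo_r[1::2]) / 2        # then the same vertically
    LH = (lo_r[0::2] - lo_r[1::2]) / 2
    HL = (hi_r[0::2] + hi_r[1::2]) / 2
    HH = (hi_r[0::2] - hi_r[1::2]) / 2
    return LL, LH, HL, HH

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
LL, LH, HL, HH = haar2d(frame)
# Decoding only LL gives a half-resolution preview (spatial scalability);
# sending the detail subbands progressively refines quality.
print(LL.shape, LH.shape, HL.shape, HH.shape)
```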
@mastersthesis{diva2:330620,
author = {Johansson, Gustaf},
title = {{Scalable video coding using the Discrete Wavelet Transform:
Skalbar videokodning med användning av den diskreta wavelettransformen}},
school = {Linköping University},
type = {{LITH-ISY-EX--10/4209--SE}},
year = {2010},
address = {Sweden},
}
Many different approaches have been taken towards solving the stereo correspondence problem and great progress has been made within the field during the last decade. This is mainly thanks to newly evolved global optimization techniques and better ways to compute pixel dissimilarity between views. The most successful algorithms are based on approaches that explicitly model smoothness assumptions made about the physical world, with image segmentation and plane fitting being two frequently used techniques.
Within the project, a survey of state of the art stereo algorithms was conducted and the theory behind them is explained. Techniques found interesting were implemented for experimental trials and an algorithm aiming to achieve state of the art performance was implemented and evaluated. For several cases, state of the art performance was reached.
To keep down the computational complexity, an algorithm relying on local winner-take-all optimization, image segmentation and plane fitting was compared against minimizing a global energy function formulated on pixel level. Experiments show that the local approach in several cases can match the global approach, but that problems sometimes arise – especially when large areas that lack texture are present. Such problematic areas are better handled by the explicit modeling of smoothness in global energy minimization.
Lastly, disparity estimation for image sequences was explored and some ideas on how to use temporal information were implemented and tried. The ideas mainly relied on motion detection to determine which parts of a sequence of frames are static. Stereo correspondence for sequences is a rather new research field, and there is still a lot of work to be done.
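For reference, the local baseline mentioned in the abstract can be written down in a few lines. The Python sketch below (illustrative only; the thesis algorithm adds image segmentation and plane fitting on top of this kind of local matching, and the window size and disparity range are assumptions) implements winner-take-all block matching with a sum-of-absolute-differences cost:

```python
# Bare-bones local winner-take-all stereo matcher using a sum-of-absolute-differences
# cost aggregated over a square window. Illustrative sketch only.
import numpy as np

def wta_disparity(left, right, max_disp=16, half_win=3):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf)
    k = 2 * half_win + 1
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)            # candidate disparity d
        cost = np.abs(left.astype(float) - shifted)
        # box-filter the per-pixel cost via an integral image to aggregate over the window
        cs = np.cumsum(np.cumsum(cost, axis=0), axis=1)
        cs = np.pad(cs, ((1, 0), (1, 0)))
        agg = cs[k:, k:] - cs[:-k, k:] - cs[k:, :-k] + cs[:-k, :-k]
        agg = np.pad(agg, half_win, mode="edge")
        better = agg < best
        best[better] = agg[better]
        disp[better] = d                               # winner takes all per pixel
    return disp

rng = np.random.default_rng(2)
right = rng.random((60, 80))
left = np.roll(right, 5, axis=1)                       # synthetic pair, true disparity 5
print(np.bincount(wta_disparity(left, right).ravel()).argmax())   # -> 5
```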
@mastersthesis{diva2:328101,
author = {Olofsson, Anders},
title = {{Modern Stereo Correspondence Algorithms:
Investigation and Evaluation}},
school = {Linköping University},
type = {{LiTH-ISY-Ex--10/4432--SE}},
year = {2010},
address = {Sweden},
}
This master thesis investigates different approaches to data compression on common types of signals in the context of localization by estimating time difference of arrival (TDOA). The thesis includes evaluation of the compression schemes using recorded data, collected as part of the thesis work. This evaluation shows that compression is possible while preserving localization accuracy.
The recorded data is backed up with more extensive simulations using a free space propagation model without attenuation. The signals investigated are flat spectrum signals, signals using phase-shift keying and single side band speech signals. Signals with low bandwidth are given precedence over high bandwidth signals, since they require more data in order to get an accurate localization estimate.
The compression methods used are transform-based schemes. The transforms utilized are the Karhunen-Loève transform and the discrete Fourier transform. Different approaches for quantization of the transform components are examined, one of them being zonal sampling.
Localization is performed in the Fourier domain by calculating the steered response power from the cross-spectral density matrix. The simulations are performed in Matlab using three recording nodes in a symmetrical geometry.
The performance of localization accuracy is compared with the Cramér-Rao bound for flat spectrum signals using the standard deviation of the localization error from the compressed signals.
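A closely related Fourier-domain operation is estimating the pairwise time difference of arrival with generalized cross-correlation. The Python sketch below is illustrative only: the thesis instead computes the steered response power from the full cross-spectral density matrix of three recording nodes, and the PHAT weighting, signal length and delay used here are assumptions.

```python
# Minimal Fourier-domain TDOA estimate between two sensors using generalized
# cross-correlation with PHAT weighting. Illustrative only: the thesis computes
# steered response power from the cross-spectral density of three nodes instead.
import numpy as np

def gcc_phat_delay(x, y):
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    csd = X * np.conj(Y)                       # cross-spectral density of the pair
    r = np.fft.irfft(csd / (np.abs(csd) + 1e-12), n)
    lag = np.argmax(np.abs(r))
    return lag if lag < n // 2 else lag - n    # map wrap-around to negative lags

rng = np.random.default_rng(3)
s = rng.standard_normal(4096)                  # flat-spectrum source signal
delay = 25                                     # samples between the two sensors
x, y = s, np.roll(s, delay)
print(gcc_phat_delay(x, y))                    # close to -25: x leads y by 25 samples
```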
@mastersthesis{diva2:325175,
author = {Arbring, Joel and Hedström, Patrik},
title = {{On Data Compression for TDOA Localization}},
school = {Linköping University},
type = {{LiTH-ISY-EX--10/4352--SE}},
year = {2010},
address = {Sweden},
}
The widespread use of computer technology for information handling has resulted in a need for stronger data protection. The use of high-profile cryptographic protocols and algorithms does not by itself guarantee high security. They need to be used according to the needs of the organization, depending on its particular characteristics and available resources. The communication system in a cryptographic environment may become vulnerable to attacks if the cryptographic packages do not meet their intended goals.
This master's thesis is targeted towards evaluating contemporary cryptographic algorithms and protocols, collectively referred to as cryptographic packages, according to the security needs of the organization and the available resources.
The results show that there certainly is a need for careful evaluation of cryptographic packages against the available resources; otherwise the choice could create more severe problems such as network bottlenecks, information and identity loss, untrustworthy environments and computational infeasibility resulting in huge response times. In contrast, choosing the right package with the right security parameters can lead to a secure communication environment with the best performance.
@mastersthesis{diva2:209420,
author = {Raheem, Muhammad},
title = {{Evaluation of Cryptographic Packages}},
school = {Linköping University},
type = {{LITH-ISY-EX--09/4159--SE}},
year = {2009},
address = {Sweden},
}