Department of Computer Science
Summer Internship Programme
Summer 2011

Projects for summer 2011 will be advertised here. Please revisit this page to view recently added projects.

See Previous Projects to get an idea of past projects.


Python Programmer

Part-time or full-time summer position available (potentially extendable through the 2011-12 academic year) with a UCL research group developing www.hydroplatform.org, a data manager and visualiser for network models (water, energy, transport, etc.).

Skills: Python (experience with or interest in the language is essential); databases (currently SQLite) would be valuable.
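As an illustration of the kind of data handling involved, here is a minimal sketch (using Python's built-in sqlite3 module) of storing and querying a small network model. The table layout and values are invented for illustration; they are not the actual hydroplatform schema.

    import sqlite3

    # Minimal sketch: a network model held as nodes and links in SQLite.
    # Table and column names are illustrative, not the hydroplatform schema.
    conn = sqlite3.connect("network.db")
    conn.execute("CREATE TABLE IF NOT EXISTS nodes (id INTEGER PRIMARY KEY, name TEXT, x REAL, y REAL)")
    conn.execute("CREATE TABLE IF NOT EXISTS links (id INTEGER PRIMARY KEY, src INTEGER, dst INTEGER, capacity REAL)")
    conn.execute("INSERT INTO nodes (name, x, y) VALUES (?, ?, ?)", ("Reservoir A", 0.0, 1.5))
    conn.execute("INSERT INTO nodes (name, x, y) VALUES (?, ?, ?)", ("Demand B", 2.0, 0.5))
    conn.execute("INSERT INTO links (src, dst, capacity) VALUES (?, ?, ?)", (1, 2, 100.0))
    conn.commit()

    # List every link with the names of the nodes it joins.
    query = ("SELECT n1.name, n2.name, l.capacity FROM links l "
             "JOIN nodes n1 ON l.src = n1.id JOIN nodes n2 ON l.dst = n2.id")
    for row in conn.execute(query):
        print(row)
    conn.close()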

Information can be found at www.hydroplatform.org; you can get to the Trac website by clicking on 'screenshots'. Opportunities in the future may open up through our involvement in http://www.waterhackathon.org/ and http://www.rhok.org/ and joint research with Emmanuel Letier (UCL C.S.).

Please contact Julien Harou as soon as possible at j.harou@ucl.ac.uk (www.cege.ucl.ac.uk/staff?ID=858).


Mobile app for automated personal budgeting

The internship is to develop a prototype / demonstrator for a smartphone-driven financial service, on behalf of a research project in the department called “PVNets”. An additional outcome will be some documentation about the design. The prototype / demonstrator is only loosely defined, so the role will involve functional and non-functional specification, user interface design as well as coding. Some independent research will be required at the beginning to investigate what related solutions already exist and what can be learned from them.

This service aims to help people more easily conserve and optimise the money they have by collecting and using data about their spending, indebtedness and cash balance, and about their long-term goals. The project is being run in collaboration with the financial transaction technology business Consult Hyperion (www.chyp.com), which will be hosting another intern on the project. You may have the opportunity of working with developers at Consult Hyperion’s prototyping lab. The role will involve co-ordination with the intern and project management team at Consult Hyperion, and regular travel to their offices in Guildford (travel expenses will be paid). We are looking for a second- or final-year student predicted a 2:1 or a first.

Start Date: 17th June

Duration: 8-10 weeks

Contact: Sacha Brostoff

Funding: Research Group/Matching CS bursary funds


Glass CAVE Demo [taken]

The idea of this project is to create an animated character for the UCL CS CAVE system. The character will give a speech, but the speech will be about how the CAVE itself works. The demonstration is called the "Glass CAVE" because the walls of the CAVE will appear to be transparent and will show a 3D rendering of what is behind the walls of the CAVE. Thus all the equipment (projectors, tracker, rendering machines, etc.) will be "virtually" visible. The idea is to make a standard demonstration that we can show visitors to the CAVE.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Anthony Steed

Funding: Research Group/Matching CS bursary funds


Crowd-Sourcing for Real-Time Information Collection in the Public Transport Domain [taken]

While disruptions to public transport services are unavoidable, much can be done to improve the end-user experience when disruptions strike. Static timetable information is already being complemented with real-time information, for example, telling travellers how long they have to wait for the next bus to come. However, this information is computed purely based on geographical distance; it thus does not take into account traffic, roadworks, incidents, and the resulting delays these may cause. A different approach is called for, capable of providing travellers with reliable information so as to enable them to make informed choices about their travel options.

To achieve this goal, this project intends to explore a new computing paradigm called crowd sourcing: rather than fully relying on the transport authority as a travel information source, the task of collecting live updates about the transport network is assigned to the end-users (i.e., the travellers) themselves. Crowd sourcing has been made popular by Web 2.0 technologies, and has been deployed online to achieve a wide variety of business goals (e.g., via Mechanical Turk). The question we aim to explore with this project is whether this computing paradigm is suitable in completely decentralised and highly dynamic settings, where the crowd contributing to the task is continuously changing (e.g., depending on who is riding the bus right now), the task to be performed (i.e., collecting travel information) has no expiry date and yet has to be performed continuously and in real-time, and where the incentives to contribute do not come from monetary rewards but purely from personal (community) interest.

The project will consist of three main steps: (1) first, a study of a variety of crowd sourcing paradigms and projects will be conducted, in order to assess successes and failures of different business and incentive models in different contexts; the outcome of this step will be a written report. (2) Second, a mobile phone application will be developed (free choice of platform), that enables people on one side to "check in" to a bus, to provide live service updates, and on the other side to connect to TfL live departure boards, while complementing this information with crowd-sourced information. (3) Finally, a small user study will be conducted, to assess the usability of the application in practice.
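To give a flavour of step (2), the sketch below shows one way crowd-sourced check-ins might be blended with an official, distance-based arrival estimate. It is a minimal sketch under stated assumptions: the data structures, the freshness weighting and the assumed 10-minute headway are all illustrative, and nothing here reflects the actual TfL feed.

    from dataclasses import dataclass
    import time

    @dataclass
    class CheckIn:
        bus_id: str
        stop_id: str
        timestamp: float  # seconds since epoch, as reported by a traveller's phone

    def blended_wait_estimate(official_minutes, checkins, stop_id, staleness_s=600):
        """Blend the official (distance-based) estimate with recent crowd reports.
        Weighting and the assumed 10-minute headway are illustrative only."""
        now = time.time()
        recent = [c for c in checkins if c.stop_id == stop_id and now - c.timestamp < staleness_s]
        if not recent:
            return official_minutes
        # Naive crowd estimate: the freshest check-in suggests a bus has just served
        # the stop, so the expected wait is roughly one headway minus elapsed time.
        freshest_s = min(now - c.timestamp for c in recent)
        crowd_minutes = max(10.0 - freshest_s / 60.0, 0.0)
        weight = len(recent) / (len(recent) + 2.0)  # more reports -> trust the crowd more
        return (1 - weight) * official_minutes + weight * crowd_minutes

    print(blended_wait_estimate(7.0, [CheckIn("bus42", "stopA", time.time() - 120)], "stopA"))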

Start date: ASAP after exam period

Duration: 10-12 weeks

Contact: Licia Capra

Funding: [EPSRC or UCL/CS or EC]


Prediction Models for Real-Time Information Distribution in the Public Transport Domain [taken]

To help people navigate the public transport infrastructure, cities like London offer free travel alert services (http://alerts.tfl.gov.uk/): registered users specify which tube stations/lines they are interested in and during what periods of time (e.g., from Highgate to Goodge Street, Monday to Friday, between 7.30am and 8.30am), and the system will automatically send them text messages whenever disruptions affecting these segments occur. While useful, the system is quite rudimentary: if a user deviates from the pre-registered information (e.g., should they be travelling at a different time, to a different destination, or not travelling at all), the system provides them with useless information. A different approach is called for, where the users' travel patterns are automatically monitored and learned over time, so that the registration process can be fully automated and dynamically updated without user intervention; only travel alerts which are useful given the user's current situation are then pushed to them.

To achieve this goal, we have conducted an initial study of two large tube datasets provided by TfL, each covering a 3-month period, and containing more than 7M trips made by about 300K users. The study shows that, despite individual variations, users exhibit a certain regularity of movement which can be automatically learned over time. We have mined this information to make better travel time predictions than current methods. The goal of this project is to explore the applicability of similar techniques to predict users' interests in terms of what stations and at what times, thus enabling a more accurate and transparent travel alert service.

The project will consist of three main steps: (1) first, a review of a variety of prediction models will be undertaken, in order to understand the strengths and weaknesses of these techniques, specifically when dealing with public transport data. (2) Second, a mobile phone application will be developed (free choice of platform), that monitors people's movements at regular intervals, and that uses this information to dynamically compute a user's travel profile (what stations/at what times), thus informing them of only those travel updates which are deemed relevant there and then. (3) Finally, a small user study will be conducted, to assess the accuracy of the application in practice, with respect to the currently available TfL alert services.
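To make step (2) concrete, here is a minimal sketch of a learned travel profile used to filter alerts; the station names, hour-of-day bins and the relevance threshold are illustrative assumptions rather than the project's actual design.

    from collections import Counter

    def build_profile(trips):
        """trips: list of (station, hour_of_day) pairs observed for one user.
        Returns the relative frequency of each (station, hour) bin."""
        counts = Counter(trips)
        total = sum(counts.values())
        return {key: n / total for key, n in counts.items()}

    def relevant_alerts(alerts, profile, threshold=0.05):
        """Keep only alerts touching (station, hour) bins the user actually uses."""
        return [a for a in alerts if profile.get((a["station"], a["hour"]), 0.0) >= threshold]

    trips = [("Highgate", 8), ("Highgate", 8), ("Goodge Street", 18), ("Highgate", 8)]
    alerts = [{"station": "Highgate", "hour": 8, "text": "Minor delays"},
              {"station": "Bank", "hour": 8, "text": "Severe delays"}]
    print(relevant_alerts(alerts, build_profile(trips)))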

Start date: ASAP after exam period

Duration: 10-12 weeks

Contact: Licia Capra

Funding: [EPSRC or UCL/CS or EC]


Multi-criteria decision making for sustainability management [taken]

Sustainability management systems are critical enabling technologies which help to address the challenges of climate change, rising energy costs, and preservation of a healthy environment. These systems help organisations to collect and make sense of large volumes of data concerning a range of sustainability indicators related notably to energy, travel, waste and water. The information provided by these systems is then used to support strategic decisions for improving the efficiency of an organisation with respect to its sustainability goals, to monitor their impact, and to revise them if needed.

This internship will be a part of an on-going research project in this domain. The objectives of the internship will be to study multi-criteria decision analysis techniques that can be used in this context and develop tool support for such analysis. The project will involve analysing UCL energy and carbon footprint data as a whole as well as fine-grained data for a few specific buildings.
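One of the simplest techniques such a study might cover is a weighted-sum model; the sketch below ranks candidate interventions against weighted sustainability criteria. The criteria, weights and scores are invented for illustration and are not UCL data.

    # Weighted-sum MCDA sketch: rank candidate interventions against sustainability criteria.
    # Criteria, weights and scores are invented for illustration; they are not UCL data.
    weights = {"energy": 0.4, "travel": 0.2, "waste": 0.2, "water": 0.2}

    options = {
        "Retrofit building insulation": {"energy": 0.9, "travel": 0.1, "waste": 0.2, "water": 0.1},
        "Install rainwater harvesting": {"energy": 0.1, "travel": 0.0, "waste": 0.1, "water": 0.9},
    }

    def weighted_score(scores):
        return sum(weights[c] * scores.get(c, 0.0) for c in weights)

    for name in sorted(options, key=lambda n: -weighted_score(options[n])):
        print(f"{name}: {weighted_score(options[name]):.2f}")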

The ideal candidate will have strong mathematical, programming, and creative skills.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Emmanuel Letier (e.letier@cs.ucl.ac.uk)

Funding: EPSRC or CS bursary


Follow the Leader [taken]

Live music performance by ensembles such as pop or jazz bands requires the musicians to track and predict the underlying beat of the music that they and the others are playing. When members of the band are absent, it would be helpful if a computer could step in to play the missing part or parts. This means that the computer has to (among other things) track the beat for itself. Although beat-tracking in audio is a well-studied problem, managing the problem robustly in the context of a popular music ensemble presents new challenges in reconciling multiple beat sources that drop in and out, potentially based on models of the ensemble. This project will develop and compare methods and software to reconcile multiple sources of beat information building on existing beat tracking methods (e.g. aubio) in puredata or other appropriate languages.
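As one illustration of what reconciling multiple beat sources could mean in code, the sketch below combines tempo estimates from several independent trackers using a confidence-weighted median; the data format and weighting scheme are assumptions, not the aubio API.

    def reconcile_tempi(estimates):
        """estimates: (bpm, confidence) pairs from independent beat trackers, some of
        which may have dropped out (confidence 0). Returns a confidence-weighted median."""
        live = sorted((bpm, conf) for bpm, conf in estimates if conf > 0)
        if not live:
            return None
        total = sum(conf for _, conf in live)
        running = 0.0
        for bpm, conf in live:
            running += conf
            if running >= total / 2:
                return bpm

    # Three trackers roughly agree; a fourth has drifted and a fifth has dropped out.
    print(reconcile_tempi([(120.0, 0.9), (121.5, 0.7), (119.0, 0.6), (60.0, 0.1), (0.0, 0.0)]))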

The project outcomes will be algorithms and implementations of beat reconciliation approaches and a conference paper describing the results.

Start Date: ASAP after the exam period (project must be complete by 29th July)

Duration: 8-10 weeks

Contact: Nicolas Gold

Funding: (EPSRC or CS Dept Bursary)


The Digital Troubadour [taken]

Medieval song has been studied for many years to understand the relationship between the texts and the music they are set to. Since recordings of medieval songs dating from the time they were written cannot exist, there is an interpretative gap between the notation and the accepted norms of performance. To help scholars explore the various ways in which performances might be undertaken, this project will develop software to generate performances of medieval songs and evaluate them under aesthetic and other criteria. The project will use the Cantoris web service API to render the vocal performances using the underlying Vocaloid library, and apply search algorithms to undertake an exploration and evaluation of the performance space in conjunction with human scholars. Software tools to help scholars explore and evaluate the space will also be generated. The project is part of an interdisciplinary collaboration between the Departments of Computer Science and English at UCL and the Department of Music at Royal Holloway.
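One simple way the performance space might be explored is a local search over a vector of performance parameters; in the sketch below the rendering-and-scoring step is a hypothetical stand-in (toy_score) for a call to the Cantoris API followed by an aesthetic evaluation, not the project's actual method.

    import random

    def hill_climb_performance(initial_params, score, steps=200, step_size=0.05):
        """Greedy local search over a performance-parameter vector (e.g. tempo,
        ornament density, phrasing weights). score stands in for rendering via
        the Cantoris API followed by an aesthetic evaluation."""
        best = list(initial_params)
        best_score = score(best)
        for _ in range(steps):
            candidate = [p + random.uniform(-step_size, step_size) for p in best]
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
        return best, best_score

    # Toy scoring function for illustration only: prefers parameters near 0.5.
    toy_score = lambda params: -sum((p - 0.5) ** 2 for p in params)
    print(hill_climb_performance([0.2, 0.8, 0.4], toy_score))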

The expected project outcomes will be algorithms for generating song performances based on performance norms, software to enable the exploration of these for scholars, and a conference paper describing the results.

Start Date: ASAP after the exam period (project must be complete by 29th July)

Duration: 8-10 weeks

Contact: Nicolas Gold

Funding: (EPSRC or CS Dept Bursary)


The Sound of Software [taken]

Understanding the source code of existing systems can represent the highest cost in the software lifecycle. Comprehension requires the creation of a mental model from the static source code, documentation, and execution traces, among other things. Visualisation has been an important part of the comprehension toolkit for many years, but only recently has auralisation begun to be explored as an alternative route to aiding the comprehension of software. This project will explore the potential of granular synthesis as a means to aid the comprehension of source code. Granular synthesis composes large-scale sounds out of tiny sound grains, naturally mirroring the large-scale behaviour of programs as composed of small-scale source code statements. It may therefore offer a natural mapping for auralising program behaviour.
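A minimal sketch of one possible mapping, turning execution-trace events into sound grains and writing them to a short WAV file using only the Python standard library; the trace format, pitch mapping and grain parameters are all illustrative assumptions.

    import math, struct, wave

    def grain(freq, dur=0.03, rate=44100, amp=0.3):
        """One Gaussian-windowed sine grain."""
        n = int(dur * rate)
        return [amp * math.exp(-((i - n / 2) / (n / 4)) ** 2) * math.sin(2 * math.pi * freq * i / rate)
                for i in range(n)]

    # Illustrative mapping: each traced statement becomes a grain whose pitch encodes
    # nesting depth and whose onset encodes execution order.
    trace = [("call", 1), ("assign", 2), ("loop", 2), ("assign", 3), ("return", 1)]
    rate = 44100
    samples = [0.0] * rate  # one second of audio
    for order, (event, depth) in enumerate(trace):
        g = grain(220.0 * depth)
        start = int(order * 0.15 * rate)
        for i, s in enumerate(g):
            samples[start + i] += s

    with wave.open("trace.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))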

The expected project outcomes will be algorithms and mappings for various static and dynamic aspects of source code (e.g. dependence or execution pathways) into sounds, implementations of these that can be applied to programs under test, and a conference paper describing the results.

Start Date: ASAP after the exam period (project must be complete by 29th July)

Duration: 8-10 weeks

Contact: Nicolas Gold

Funding: (EPSRC or CS Dept Bursary)


Data mining for CiteSeeing [taken]

CiteSeeing.com is a citation database derived from a database of 1.5 million computer science publications at citeseerx.its.psu.edu. This database is very noisy. We would therefore like to improve the quality of the database and increase its coverage by cross-referencing and importing additional information from Google Scholar and DBLP (http://dblp.mpi-inf.mpg.de/dblp-mirror/index.php).
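The core of the cross-referencing task is record linkage; a minimal sketch of matching noisy entries against a cleaner source by normalised title similarity is shown below. The record format and the similarity threshold are assumptions for illustration only.

    import difflib, re

    def normalise(title):
        return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

    def similarity(a, b):
        return difflib.SequenceMatcher(None, normalise(a), normalise(b)).ratio()

    def match(noisy_records, clean_records, threshold=0.9):
        """Pair each noisy record with its best-matching clean record, if similar enough."""
        pairs = []
        for n in noisy_records:
            best = max(clean_records, key=lambda c: similarity(n["title"], c["title"]))
            score = similarity(n["title"], best["title"])
            if score >= threshold:
                pairs.append((n["title"], best["title"], round(score, 2)))
        return pairs

    noisy = [{"title": "Data Minning for Citation Databses"}]
    clean = [{"title": "Data Mining for Citation Databases"}, {"title": "Another Paper"}]
    print(match(noisy, clean))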

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Ingemar Cox

Funding: EPSRC/CS bursary



Summer 2010

Data mining citation information for documents

Applications such as iTunes provide a database capability for managing personal music collections. These applications require music files to be identified, i.e. artist, album, track names, etc. This information is seldom present in the mp3 file, but is instead provided by an online service. Documents, e.g. scientific articles in PDF format, do not have equivalent database management applications, in part because identification of the document, e.g. author, title, publisher, etc., is not present in the PDF file. We are developing a service to address this issue. A fingerprint is extracted from the document file and sent to an identification service. If the fingerprint is present in the database, the corresponding citation information is returned. If the fingerprint is absent from the database, the user is requested to provide the information. Clearly, such a service becomes increasingly useful as the size of the database increases.

The goal of this project is to data mine several public databases to (i) find scientific articles in pdf format, (ii) identify these documents, and (iii) add the document fingerprints and associated citation information to our database.
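As a sketch of the identification step, the code below hashes whitespace-normalised extracted text as the fingerprint and uses a local dictionary as a stand-in for the real identification service; the actual fingerprinting scheme used by the service is not specified here and is assumed.

    import hashlib, re

    def fingerprint(extracted_text):
        """Illustrative fingerprint: SHA-256 of whitespace-normalised text. A real
        service would likely use a more robust, layout-tolerant scheme."""
        normalised = re.sub(r"\s+", " ", extracted_text.lower()).strip()
        return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

    database = {}  # local dictionary standing in for the identification service

    def identify(extracted_text, citation_if_unknown=None):
        fp = fingerprint(extracted_text)
        if fp in database:
            return database[fp]
        if citation_if_unknown is not None:  # the user supplies the metadata
            database[fp] = citation_if_unknown
        return citation_if_unknown

    identify("A Study of Things\nA. Author, 2009",
             {"author": "A. Author", "title": "A Study of Things", "year": 2009})
    print(identify("A Study of  Things A. Author, 2009"))  # found despite spacing differences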

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Ingemar Cox

Funding: CS bursary


Bibliographic database based on an online citation system

Applications such as iTunes provide a database capability for managing personal music collections. These applications require music files to be identified, i.e. artist, album, track names, etc. This information is seldom present in the mp3 file, but is instead provided by an online service. Documents, e.g. scientific articles in PDF format, do not have equivalent database management applications, in part because identification of the document, e.g. author, title, publisher, etc., is not present in the PDF file. We are developing a service to address this issue. A fingerprint is extracted from the document file and sent to an identification service. If the fingerprint is present in the database, the corresponding citation information is returned. If the fingerprint is absent from the database, the user is requested to provide the information.

The goal of this project is to develop an application that demonstrates the utility of this service, for example by adding an import facility to an open-source tool such as BibTeX. The utility would find all PDFs in a user's directories, compute their fingerprints, add those recognised by the service to a bibliographic database, and then ask the user to input data for those documents not found.
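A minimal sketch of the scanning side of such a utility: walk the user's directories, fingerprint each PDF, and emit a BibTeX entry for anything the service recognises. The lookup function is a hypothetical stand-in for the identification service, and the raw file bytes are hashed in place of proper text extraction.

    import hashlib, os

    def lookup(fp):
        """Hypothetical stand-in for the identification service described above."""
        return {"author": "A. Author", "title": "Example Paper", "year": "2009"} if fp.startswith("0") else None

    def scan(root):
        entries, unknown = [], []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.lower().endswith(".pdf"):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    fp = hashlib.sha256(f.read()).hexdigest()  # crude fingerprint of the raw bytes
                meta = lookup(fp)
                if meta:
                    entries.append("@article{%s,\n  author = {%s},\n  title = {%s},\n  year = {%s}\n}"
                                   % (fp[:8], meta["author"], meta["title"], meta["year"]))
                else:
                    unknown.append(path)  # ask the user to supply metadata for these
        return entries, unknown

    entries, unknown = scan(os.path.expanduser("~/papers"))
    print("\n\n".join(entries))
    print("Needs manual input:", unknown)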

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Ingemar Cox

Funding: CS bursary


The Ultimate Zigbee Base Station

Wireless sensor networks often involve a "collection" pattern of data flow: a large field of wireless sensors takes many readings, which are collectively routed to a sensornet access point (SAP) connected to more powerful compute infrastructure capable of processing the data. As a result, the wireless medium around the SAPs is frequently overloaded, and becomes the bottleneck that limits the time resolution of readings from the entire sensornet.

In this project, you will build the ultimate Zigbee base station that uses two cutting-edge techniques from the physical layer to blow away this bottleneck: successive interference cancellation (SIC), and MIMO (multiple-input, multiple-output) antenna signal processing. The first, SIC, allows the receiver to essentially subtract interference from an incoming signal, while the second, MIMO, allows a receiver to add the incoming signals from two attached antennas, "beamsteering" itself away from undesired signals. The trick, and open research question, is knowing when to use either technique (or both). You'll test your ideas using real sensornet "motes" talking to a software-defined radio SAP.
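A toy numerical sketch of the two ideas (not the real DSP pipeline, and assuming perfect channel and interference estimates): subtract a known interferer from each antenna's samples, then combine the two antennas with maximal-ratio combining.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    desired = rng.choice([-1.0, 1.0], n)      # BPSK symbols from the mote of interest
    interferer = rng.choice([-1.0, 1.0], n)   # concurrent transmission from another mote
    h = np.array([0.9 + 0.2j, 0.4 - 0.7j])    # channel of desired signal to antennas 1 and 2
    g = np.array([0.5 - 0.1j, 0.8 + 0.3j])    # channel of interferer to antennas 1 and 2
    noise = 0.05 * (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n)))

    rx = np.outer(h, desired) + np.outer(g, interferer) + noise  # what the two antennas hear

    # Successive interference cancellation: subtract the (assumed known) interferer.
    cancelled = rx - np.outer(g, interferer)

    # MIMO-style maximal-ratio combining across the two antennas.
    combined = (np.conj(h)[:, None] * cancelled).sum(axis=0) / (np.abs(h) ** 2).sum()

    print("symbol error rate:", np.mean(np.sign(combined.real) != desired))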

The ideal candidate for this project will have strong proclivities for building and hacking real-world software-defined radio systems like GNU Radio.

Start date: ASAP after exam period

Duration: 10--12 weeks

Contact: Kyle Jamieson

Funding: CS bursary


Faster, More Reliable Wireless Networks

Interference and wireless fading both cause losses on the wireless link from the access point (AP) to your computer when you download a file or fetch a web page over the 802.11 interface. But most of the time, you are using TCP, the Internet protocol for reliable data transfer, so the data rate you see is also impacted by loss on the "uplink" from your computer to the AP.

In a large-scale experimental wireless testbed in the department, you will investigate just how much uplink loss there is in real wireless LANs, and the extent to which this loss hinders application-level performance. You'll then build better bit-rate adaptation algorithms for coping with such loss.
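For a sense of the starting point, here is a minimal sketch of a loss-based bit-rate adaptation rule of the kind the project would aim to improve on; the rate set is 802.11g's, but the thresholds and the loss window are illustrative assumptions.

    RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]  # 802.11g rate set

    def adapt(current_index, recent_frames, up_threshold=0.1, down_threshold=0.4):
        """recent_frames: list of booleans, True meaning the frame was lost.
        Step the rate down when loss is high, probe upwards when it is low."""
        loss = sum(recent_frames) / len(recent_frames)
        if loss > down_threshold and current_index > 0:
            return current_index - 1
        if loss < up_threshold and current_index < len(RATES_MBPS) - 1:
            return current_index + 1
        return current_index

    idx = 5  # start at 36 Mbit/s
    for window in ([False] * 19 + [True] * 1, [True] * 10 + [False] * 10):
        idx = adapt(idx, window)
        print("after window ->", RATES_MBPS[idx], "Mbit/s")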

The ideal candidate for this project will have strong proclivities for building, hacking, and measuring real-world wireless and computer systems.

Start date: ASAP after exam period

Duration: 10--12 weeks

Contact: Kyle Jamieson

Funding: CS bursary


Nurturing Social Networks Using Mobile Phones

Youngsters increasingly rely on social-networking websites (e.g., Facebook, Bebo) to maintain contact with each other. The use of such websites is becoming pervasive, thanks to social networking applications running directly on users’ mobile phones. In the past, we have proposed new ways of nurturing contacts by monitoring users’ activity with mobile phones, where activity is defined in terms of text messages/phone calls, as well as physical encounters captured by Bluetooth. In so doing, we have been able to recommend new friends to youngsters, based on their movement pattern and phone usage.

We now intend to extend the proposed mechanism so as to monitor users’ social networking activity too (e.g., status changes on Facebook, tweets sent via Twitter). The aim is to analyse whether, based on social networking activity, one is able to track the health of friendships (and alert users when they may be neglecting a friendship), and to make users aware of their mood (so that they can take action and keep negative emotions under control).

The goal of this project is twofold: (a) first, to evaluate mood/social-network-health prediction algorithms on available datasets; (b) second, once the prediction algorithms have been fine-tuned, to implement them on a chosen mobile platform (e.g., Android).

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Licia Capra

Funding: CS bursary


Context-Aware Search Engine via Online Social Networks

Users of mobile devices often have queries which are context-dependent and for which an answer should be given right away. For example: “who built the palace I am standing in front of?”, or “who is willing to share the 2-for-1 ticket offer I have?” and so on. These types of queries cannot be answered using traditional search engines like Google. Rather, online social networks are emerging as a viable means to provide real-time answers to context-dependent queries. For any given query of the above type, only a portion of an individual social network should be relied upon, based, for example, on users’ location and current status.

The goal of the project is twofold: (a) develop a native Android application, and a companion Facebook service, that will enable mobile users to ask context-aware queries to their “relevant” online social contacts; (b) perform a small-scale user-study to evaluate users’ perception of the technology.
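A minimal sketch of how the "relevant" portion of the social network might be selected for a location-dependent query, filtering contacts by distance from the place the query is about and by recency of activity; the contact format, radius and freshness window are illustrative assumptions.

    import math, time

    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
        h = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(h))

    def relevant_contacts(contacts, query_location, radius_km=1.0, max_idle_s=3600):
        """Contacts who are near the place the query is about and were recently active."""
        now = time.time()
        return [c for c in contacts
                if haversine_km(c["location"], query_location) <= radius_km
                and now - c["last_active"] <= max_idle_s]

    contacts = [{"name": "Ann", "location": (51.5246, -0.1340), "last_active": time.time() - 300},
                {"name": "Bob", "location": (48.8566, 2.3522), "last_active": time.time() - 60}]
    print(relevant_contacts(contacts, (51.5266, -0.1318)))  # a query about somewhere near UCL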

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Licia Capra

Funding: CS bursary



Summer 2009

Cancer drug discovery database and registration system

The Institute of Cancer Research is a leading centre for cancer drug discovery, with its own clinical trial facility at the Royal Marsden hospital. In the process of drug discovery and clinical trials, large amounts of clinical and chemical data are generated, stored and analysed.

This project focusses on building a functional prototype database to capture data relating to drug discovery programs, together with a user-friendly web interface to enable registration, auditing and analysis of this data.

The project will focus on two main areas:

  • Database infrastructure in Oracle: including data modelling and database design, and Oracle auditing and security features
  • Use-case driven user interfaces: including requirements gathering, HCI considerations and web development

Start date: ASAP after exam period

Duration: 10 weeks

Contact: Bissan Al-Lazikani (bissan.al-lazikani@icr.ac.uk) or Mark Halling-Brown (mhallingbrown@icr.ac.uk), Computational Biology and Chemogenomics, CRUK Centre for Cancer Therapeutics, Institute of Cancer Research

Funding: ICR - CRUK Centre for Cancer Therapeutics


User interface development

Applications are invited for a summer internship working on further development of the user interface for a computational fluid dynamics modelling program. Applicants are expected to be competent in Microsoft Visual Basic (Studio) programming, preferably with some knowledge of Fortran.

Start date: ASAP after exam period

Duration: 8 - 12 weeks

Contact: Professor Haroun Mahgerefteh; Department of Chemical Engineering; h.mahgerefteh@ucl.ac.uk; 020 7679 3835

Funding: Research group


(Project taken) Grid and Semantic computing for Cancer databases

Cancer researchers constantly generate, validate and interpret data. This data is likely to be distributed, use different formats, employ distinct terms for the same entity (e.g. carcinoma or tumour) or even the same term to refer to different entities (e.g. insulin may be a drug or a gene, depending on the context).

Grid computing is a networking environment where distributed computers are used cooperatively to solve a single problem that deals with large amounts of data and/or requires large computational power. Semantic web technology provides methods to represent, query and reason over data.

This project involves these two exciting and cutting-edge technologies, grid computing and semantic web, to handle cancer-related data.

The aim is to provide a way of searching distributed cancer-related databases considering the meaning of the data.

In order to achieve this goal, we have built software components that generate semantic representations of the distributed databases and support querying over these representations. This project aims at developing grid services based on these semantic components, and involves using technologies such as Java, Web Services, RDF and OWL.
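For the semantic-web side of the stack, the sketch below uses the rdflib Python library to build a small RDF graph and run a SPARQL query over it; the vocabulary and data are invented, and the project itself targets Java-based grid services rather than Python.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/cancer#")
    g = Graph()
    g.add((EX.insulin, RDF.type, EX.Gene))
    g.add((EX.insulin, EX.label, Literal("insulin")))
    g.add((EX.tamoxifen, RDF.type, EX.Drug))
    g.add((EX.tamoxifen, EX.label, Literal("tamoxifen")))

    # SPARQL query: find everything typed as a Drug, together with its label.
    results = g.query("""
        PREFIX ex: <http://example.org/cancer#>
        SELECT ?thing ?label WHERE { ?thing a ex:Drug ; ex:label ?label . }
    """)
    for row in results:
        print(row.thing, row.label)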

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Alejandra Gonzalez Beltran

Funding: CS bursary


(Project taken) Virtual Trading System

This is a project in the Financial Services area, working in a team to build a Virtual Trading System that can be used for people to trade (virtually) against each other. The project runs over a 12-week period, working on a Java web-based application.

Start date: ASAP after exam period

Duration: 12 weeks

Contact: Dan Brown

Funding: from research group


(Project taken) Leveraging Human Mobility and Social Network for Efficient Content Dissemination in MANETs

Project Description: Digital content (e.g., music, photos, videos) has become very easy to create and disseminate, thanks to the widespread success of powerful, networked handheld devices. As the volume of content being produced is increasing dramatically, it becomes crucial to be able to filter irrelevant information as early as possible; this is to avoid both network congestion and end-user overload. In order to achieve these goals, we have designed Habit, a protocol that leverages information about nodes' colocation (physical layer) and their social network (application layer) to efficiently disseminate content in mobile ad-hoc networks (MANETs). The efficiency of the protocol has been studied by means of simulation using a large set of real world traces. However, we have no information as to how the protocol would actually behave in a real setting.

The aim of the project is to develop an implementation of the Habit Protocol using the J2ME platform and for the Nokia N70 mobile phone series. A real testbed will be developed and trialed, in order to compare simulation results with actual ones. Additional measurements, in terms of resource (mainly battery) consumption, will be gathered and analysed.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Licia Capra

Funding: CS bursary


(Project taken) Time-Adaptive Collaborative Filtering for Improved Scalability

Project Description: Recommender systems based on collaborative filtering (CF) aim to accurately predict user tastes, by minimising the mean error achieved on hidden test sets of user ratings, after learning from a training set. However, deployed recommender systems do not operate on a static set of user ratings; rather, the underlying dataset is continuously growing and changing. System administrators are confronted with the problem of having to continuously tune the parameters calibrating their CF algorithm for best performance, both in terms of accuracy and scalability. We have performed an extensive temporal analysis of two rather different datasets, the Netflix dataset and the CiteULike dataset. The observations gathered suggest that time-adaptive choices of the size of users’ neighbourhoods and of the frequency of system updates can yield accurate predictions while dramatically reducing the computational overhead associated with such algorithms (thus improving scalability).

The aim of the project is to develop a method for the dynamic and automatic selection of the above mentioned CF parameters. The method will be compared to traditional ones where parameters are globally set, both on the Netflix and CiteULike datasets. An analysis of the results obtained, with respect to both accuracy and scalability, will be conducted.
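The sketch below shows the underlying user-based k-nearest-neighbour prediction, with the neighbourhood size k left as the parameter the project would select adaptively over time; the ratings data and the choice of cosine similarity are illustrative only.

    import math

    def cosine(u, v):
        common = set(u) & set(v)
        if not common:
            return 0.0
        num = sum(u[i] * v[i] for i in common)
        return num / (math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values())))

    def predict(user, item, ratings, k):
        """User-based kNN prediction; k is the neighbourhood size to be tuned adaptively."""
        neighbours = sorted(((cosine(ratings[user], ratings[v]), v) for v in ratings
                             if v != user and item in ratings[v]), reverse=True)[:k]
        total = sum(sim for sim, _ in neighbours)
        if total == 0:
            return None
        return sum(sim * ratings[v][item] for sim, v in neighbours) / total

    ratings = {"alice": {"film1": 5, "film2": 3},
               "bob": {"film1": 4, "film2": 2, "film3": 4},
               "carol": {"film1": 1, "film3": 5}}
    print(predict("alice", "film3", ratings, k=2))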

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Licia Capra

Funding: CS bursary


(Project taken) Summer UG studentships in Computer Vision applied to Biomedicine

The Pichaud Lab (www.pichaudlab.org.uk) is producing remarkable video sequences showing how the faceted eye of the fly is assembled as the fly embryo develops into an adult. These videos show an initial, apparently random, sheet of cells acquiring the regular symmetry of repeating hexagons. The order seems to come from each cell altering its shape and tugging on its neighbours, but this hypothesis needs to be tested. The significance of this research is that understanding development of the fly eye would help understand organ formation in human development.

There are two summer internship projects associated with this work. They would suit mathematically confident Computer Science UGs, interested in experiencing the interface between Computer Science and Life Sciences.

P1) Automated Analysis of cell sheet images

To allow testing of hypotheses about how the cells re-arrange themselves, each frame of the movie data needs to be converted from an array of pixel values into a graph-like data structure representing the locations, shapes and adjacencies of the cells. At present this is done by a laborious manual process in Photoshop. The aim of this project is to prototype and evaluate some methods for doing this automatically. One approach to be tried is by fitting a Voronoi diagram to the image data. Mathematica will be the main computational tool used.
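Although Mathematica is the project's tool of choice, the core idea of fitting a Voronoi diagram to detected cell centres can be sketched in Python with scipy; the cell centres below are random stand-ins for positions extracted from the microscopy frames.

    import numpy as np
    from scipy.spatial import Voronoi

    rng = np.random.default_rng(1)
    centres = rng.uniform(0, 100, size=(30, 2))  # stand-ins for detected cell centres

    vor = Voronoi(centres)

    # Each finite Voronoi region approximates one cell's outline: its vertices give the
    # shape, and shared ridges give the cell adjacencies needed for the graph structure.
    for point_idx, region_idx in enumerate(vor.point_region[:5]):
        region = vor.regions[region_idx]
        if -1 in region or not region:
            continue  # skip unbounded regions at the image border
        polygon = vor.vertices[region]
        print(f"cell {point_idx}: {len(polygon)} vertices")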

P2) Visualization and Analysis of movements of sheets of cells

Once a graph-like representation of the cells has been built, manually or automatically, the representation must be analysed to compute and visualize variables of interest. For example: how do the following change over time and space: cell areas, cell junction lengths, cell rotation and cell shear. For each such variable, algorithms must be written to compute them from the graph-like representation, and intuitive ways to visualize them must be found. Mathematica will be the main computational tool used.

The Häusser Lab (www.ucl.ac.uk/wibr/research/neuro/mh/) develop models of how brain cells ‘compute’ by analyzing neuronal wiring diagrams. They map out these wiring diagrams by meticulously slicing blocks of brain tissue and using electron microscopy to image the sectioned ‘wires’ (more properly termed axons). By re-assembling many 2-D images a 3-D image can be created. Tracing the axons through this block is difficult and time-consuming and an automated image analysis approach would be preferable.

P3) Automated extraction of neuronal membranes from EM images.

This project would continue an automated approach that is being developed under the supervision of Lewis Griffin. The previous student to work on this project developed an effective approach, based on machine learning of image cues, for inferring the presence of an axon membrane from the image data. This work needs to be extended so that the inferred pattern of membranes has a likely geometry, for example without holes. Mathematica will be the main computational tool used.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Lewis Griffin

Funding: CS bursary



Summer 2008

Mobile Augmented Reality System for Location-Based Awareness

The delivery of information to mobile users in an intuitive and hands-free manner has significant commercial potential in applications ranging from tourism, to emergency response, and even to maintenance and repair. One means of delivering this is through augmented reality: the user's position and orientation are tracked, and information is overlaid directly on the user's view of the real world. Because the system takes on the load of interpreting information and presenting it to the user, it spares the user potentially erroneous transformations between coordinate systems. Augmented reality is already widely used in sports coverage, where it shows information such as names of players, flags of countries, and even adverts on billboards.

The aim of this project would be to develop an augmented reality-based mobile computer to provide map and location-based information to users walking through the streets of London. Given current hardware (GPS, inertial trackers, head mounted displays and a mobile computer), the candidate would be expected to assemble a computing system and develop, with the supervisor's guidance, a visualisation system to show information about the user's environment. Geographical data will be obtained from open sources such as OpenStreetMap.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Simon Julier

Funding: to be determined


Magnetic Disturbance Modelling in Urban Environments

Precise knowledge of orientation (attitude) is needed for many applications including mobile robotics and augmented reality. One attractive means of measuring attitude is to use a magnetometer: a three-dimensional equivalent of a compass. Although magnetometers can compute direction with respect to magnetic north, their measurements are corrupted by two classes of noise: hard-iron biases and soft-iron biases. Hard-iron biases are caused by permanent magnets. Soft-iron biases are caused by ferromagnetic material, which stretches and magnifies the earth's magnetic field. Together, these sources of error can cause angular errors of more than 20 degrees, greatly reducing the effectiveness of the sensor.
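The usual correction model treats hard-iron bias as a constant offset and soft-iron bias as a linear distortion of the field; a minimal sketch of applying such a correction is below. The calibration values are invented, and estimating them from survey data is precisely the work this project addresses.

    import numpy as np

    # Invented calibration parameters: estimating these from survey data is the real work.
    hard_iron_offset = np.array([12.0, -3.5, 7.2])       # constant bias (uT)
    soft_iron_matrix = np.array([[1.10, 0.02, 0.00],
                                 [0.02, 0.95, 0.01],
                                 [0.00, 0.01, 1.05]])    # linear distortion of the field

    def correct(raw_reading):
        """Standard hard-/soft-iron correction: subtract the offset, undo the distortion."""
        return np.linalg.solve(soft_iron_matrix, raw_reading - hard_iron_offset)

    def heading_degrees(corrected):
        """Heading about the vertical axis, assuming the sensor is held level."""
        return np.degrees(np.arctan2(corrected[1], corrected[0])) % 360

    print(heading_degrees(correct(np.array([30.0, 5.0, 40.0]))))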

The aim of this project is to develop attitude estimation algorithms which are more robust to the effects of magnetic disturbances. There are two main objectives. The first is to conduct a study to assess the magnitudes of the errors. Although the causes of disturbances are well-known, the magnitudes typically encountered in an urban environment are not, and a survey would be carried out to quantify these values. Second, given the error models, a new algorithm will be developed which will account for more detailed information about magnetic disturbances. These algorithms could also exploit information about the type of disturbance to assist in estimating the position of the tracking system.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Simon Julier

Funding: to be determined


Distributed Sensor Network-Based Tracking System

One of the outcomes of the exponential growth in computing power is the development of motes. Motes are tiny, self-contained, battery-powered computers with radio links, which enable them to communicate and exchange data with one another, and to self-organize into ad hoc networks. Motes can be used to develop wireless sensor networks which can adapt and reconfigure themselves to environments. Motes have already been used in a variety of scientific and safety critical applications, and they are expected to become a technology even more ubiquitous than mobile phones.

Much research has concentrated on developing algorithms and techniques to support security, routing and coordination between motes. However, relatively few studies have considered the problem of performing complicated fusion and tracking operations on sensor motes.

The aim of this project is to develop a tracking system using a set of motes with onboard light sensors. These sensors are extremely simple and crude, and complicated signal processing must be carried out to generate reasonable results. There are three main objectives. First, the characteristics of the sensors must be "learned" using machine learning techniques. Second, algorithms must be implemented which will pass learned sensor data between the motes. Finally, the algorithms must be implemented and demonstrated on the sensor platform.

Start date: ASAP after exam period

Duration: 8 - 10 weeks

Contact: Simon Julier

Funding: to be determined


Two UG summer research internships: Computer Vision applied to Basic Bioscience

Background

In a collaboration between the UCL departments of Computer Science (Lewis Griffin, Senior Lecturer) and Physiology (Josef Kittler, Senior Research Fellow), supported by a CoMPLEX studentship and an EPSRC/MRC-funded pump-priming grant, methods of image analysis developed during an ongoing EPSRC-funded project are being used for the analysis of microscopy videos of neuronal receptor dynamics.

A pair of summer projects suitable for strong Computer Science undergraduates has been planned; they are described below. Students working on these projects would gain general research experience, as well as particular experience in computer vision and life-sciences interface research. In addition to project-related supervisions, and relevant internal and external seminars, they would regularly attend:

  • monthly Griffin lab meetings on Computer Vision
  • monthly Kittler lab meetings on Neurophysiology
  • monthly Griffin-Johnston study-group meetings on ‘Vision & Geometry’
  • weekly Alexander-Griffin-Kautz-Prince research seminars on ‘Imaging & Vision’

The two projects are described below:

Detecting Quantum Dots in microscopy videos using Basic Image Features

Quantum Dots (QDs) are a novel method of tagging individual biomolecules (e.g. receptor channels embedded within neuronal membranes) so that they can be tracked in optical microscopy. The student will extend a system for QD detection using Basic Image Features (BIFs). Specifically, the student will (i) develop an active learning approach to gathering manually-specified ground truth data for use in tuning the algorithm to the particular characteristics of the video, and (ii) port the algorithm from Mathematica to the ImageJ library widely used by physiology labs.

Start Date: ASAP after exam period.

Duration: 8-10 weeks

Contact: Lewis Griffin

Funding: Under discussion

NOTE: PROJECT HAS BEEN TAKEN

Detecting Neuronal structures in microscopy videos using Basic Image Patterns

While QDs make tagged biomolecules relatively easy to localize and track, the background across which they move (cellular membranes and synaptic structures, made visible using more conventional fluorescent stains) presents a complex, variable appearance. Delineating these structures is essential for meaningful quantification of the dynamics of the tagged biomolecules, but is an extremely time-consuming process. In this studentship, methods which have been developed for object recognition will be applied to this problem. In particular, a naïve Bayes classifier will be applied pixel-by-pixel to detect these structures. The classifier will be driven by the presence of Basic Image Patterns (BIPs), which are local configurations of BIFs, encoded as Templates, Histograms, Presences or Graphs.
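A minimal sketch of a per-pixel naïve Bayes decision of the kind described, with two scalar features standing in for the BIP-based cues; the priors, the Gaussian likelihood models and the data are all illustrative assumptions.

    import math

    # Illustrative class-conditional models: each feature is assumed Gaussian given the class.
    MODELS = {
        "membrane":   {"prior": 0.2, "means": (0.8, 0.6), "stds": (0.15, 0.20)},
        "background": {"prior": 0.8, "means": (0.3, 0.2), "stds": (0.20, 0.25)},
    }

    def log_gaussian(x, mean, std):
        return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

    def classify_pixel(features):
        """Naive Bayes: log prior plus independent log likelihoods, one per feature."""
        scores = {cls: math.log(m["prior"]) +
                       sum(log_gaussian(x, mu, sd) for x, mu, sd in zip(features, m["means"], m["stds"]))
                  for cls, m in MODELS.items()}
        return max(scores, key=scores.get)

    print([classify_pixel(f) for f in [(0.2, 0.1), (0.85, 0.7), (0.5, 0.4)]])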

Start Date: ASAP after exam period.

Duration: 8-10 weeks

Contact: Lewis Griffin

Funding: To be determined.

NOTE: PROJECT HAS BEEN TAKEN


Automated face registration for identity recognition using non-linear regression

Automated face recognition is of enormous commercial significance, but in current face recognition systems the subject is required to cooperate: they must stand in a certain place, face the camera and maintain a neutral expression. Under these controlled imaging conditions, face recognition algorithms perform well. One of the greatest remaining research challenges is to recognize faces in uncontrolled conditions. Now the subject may be entirely unaware of the system, and consequently the position, pose, illumination and expression of their face all exhibit considerable variation. In such uncontrolled conditions, current commercial and academic face recognition systems exhibit poor performance. In this project, we will develop methods that help enable recognition under these conditions. This will permit solutions to a plethora of real problems including:

  • Access control: Current face recognition systems require the implicit cooperation of the user. This research will remove this requirement and increase the efficiency, robustness and user-friendliness of access control applications.
  • Security footage: The UK has 4 million CCTV cameras, but current face recognition methods flounder because of the variable capture conditions. This research will permit automated analysis of faces in CCTV footage.
  • Face Search: Recognition methods fail on archived images because the faces have variable poses, illuminations and expressions. The proposed techniques are invariant to these factors and allow face search: users provide a probe face image and our methods can search the internet, or a set of photos for images of the same person.

In recent work we have developed a novel method for face recognition based on generative models. We have addressed face recognition across pose changes and currently have state of the art results in this area. However, our current methods (and all competing approaches) rely on the ability to find features (eyes, nose, mouth etc.) on the face and register to a standard template. Unfortunately, when the face is not frontal, at low resolution or under unusual lighting conditions, most algorithms to find these features fail. This in turn causes a catastrophic failure of the face recognition system.

Most current systems for finding features on the face rely on optimizing the parameters of a face template: this is a structured model that encodes both the local appearance of the face features and their relative positions. In optimization, the goal is to find the parameters of the template under which the model is most likely. These models work well for frontal faces under good viewing conditions but fail for images taken in uncontrolled conditions. They are prone to failure when (i) the initial position of the template is far from the true value, or (ii) the face does not appear in a stereotypical position or is partially occluded in some way.

The goal of this project is to develop an alternative approach to feature finding which does not rely on a strong structural model for the face. It is inspired by recent approaches to vision based on regression. Rather than apply strong domain specific information to a given vision task we instead simply parameterize the input image information and desired output information and learn the relationship between the two using a complex non-linear model.
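To illustrate the regression idea, the sketch below learns a direct mapping from raw pixel intensities to a feature coordinate using an off-the-shelf non-linear regressor; the synthetic images and the choice of a random forest from scikit-learn are purely illustrative and are not the project's method.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def synthetic_face(eye_x, eye_y, size=16):
        """Toy 'image': a bright blob at the (hidden) eye position plus noise."""
        yy, xx = np.mgrid[0:size, 0:size]
        img = np.exp(-((xx - eye_x) ** 2 + (yy - eye_y) ** 2) / 4.0)
        return (img + 0.1 * rng.standard_normal((size, size))).ravel()

    # Training data: raw pixels in, feature coordinates out; no structural face model.
    targets = rng.uniform(3, 12, size=(500, 2))
    images = np.array([synthetic_face(x, y) for x, y in targets])

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(images, targets)

    true_pos = (10.0, 5.0)
    print("true:", true_pos, "predicted:", model.predict([synthetic_face(*true_pos)])[0])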

Start Date: July 1

Duration: 8-10 weeks

Contact: Simon Prince

Funding: To be determined.


Vessel-based medical image registration

Background

Image registration is an important field of image analysis, the aim of which is to align images by transforming them into a common co-ordinate system. This is useful in many applications, including image comparison, for example to quantify changes in tissue as a result of a disease process, and image fusion, where information from different images is combined into a single, more informative image. Blood vessels (arteries and veins) are often clearly identifiable in medical images of the same organ obtained using different imaging techniques. This property makes them a very useful anatomical feature on which to base a registration algorithm. However, soft-tissue deformation and variations in the grey-level brightness of blood vessels within and between different imaging modalities make vessel-based registration (especially for the case where vessels have moved and deformed between images) a challenging problem in general. This project will investigate algorithms for automatic registration of blood vessels; such algorithms have many applications, but the focus here will be on guiding surgical interventions.

Project Details

During the project, the student will gain experience of working within an academic research environment and will learn advanced image processing techniques for enhancing blood vessels in medical images. The principal aim will be to experiment with applying a scale-space approach in which vessels are enhanced (i.e. their brightness in the output image is increased with respect to the background) and in which the local direction vectors of vessels are computed. Particular emphasis will be put on the application of vessel-based registration for guiding surgical interventions of the brain and liver, and the student will have access to a library of clinical images obtained using a variety of imaging modalities, including MRI, CT, x-ray angiography and ultrasound. The student will also be given the opportunity to see such images being acquired in a hospital and will therefore gain a wider understanding of the role of medical imaging in clinical practice.

The aim in the initial phase of the project will be to develop an algorithm for rigid registration of vessels and to test its performance using different sets of clinical images. This algorithm will assume that tissue does not deform between images and that images can be aligned using a rigid-body transformation, where only translations and rotations are allowed. The more general (and interesting!) problem of non-rigid registration, where vessels are allowed to deform, will be considered as the project progresses. The student will be strongly encouraged to attend seminars during the internship and present their own work. If appropriate, the student will also be encouraged to write a conference or journal paper on their work.
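As a pointer to the vessel-enhancement step, the sketch below applies the Hessian-based (Frangi-style) vesselness filter available in scikit-image to a synthetic image; the scale range is an assumption, and the subsequent rigid registration of the enhanced images is left as the project's own work.

    import numpy as np
    from skimage.filters import frangi

    # Synthetic stand-in for a clinical image slice: a bright curved 'vessel' on noise.
    yy, xx = np.mgrid[0:128, 0:128]
    vessel = np.exp(-((yy - (64 + 15 * np.sin(xx / 20.0))) ** 2) / 8.0)
    image = vessel + 0.2 * np.random.default_rng(0).standard_normal((128, 128))

    # Multi-scale vesselness: tubular structures respond strongly, background is suppressed.
    enhanced = frangi(image, sigmas=range(1, 4), black_ridges=False)

    print("background mean:", float(enhanced[:20, :20].mean()))
    print("vessel mean:    ", float(enhanced[vessel > 0.5].mean()))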

Ideally, the student taking up this internship will already be familiar with fundamental concepts of image registration and image processing and have a strong interest in medical imaging and research. The student should also be familiar with Matlab and be comfortable with the mathematical aspects of image and signal processing.

Contact: Dr. Dean Barratt (d.barratt@ucl.ac.uk)

Start date: ASAP after exams

Funding: Full funding is available

