Electrical and Computer Engineering Faculty Projects 

Human-Robot Collaboration

Ryder Winck, Carlotta Berry, Michael Wollowski, Yosi Shibberu

The long-term goal of this project is to allow a human and a robot arm to collaborate seamlessly on a task (e.g., putting objects in boxes, tying a knot, or soldering PCBs). The robot arm could be thought of as an assistant to the human or as a third arm for the human. One key to enabling this level of interaction is the ability to detect human action and determine human intent. To reach that goal, we are currently developing a testbed for research on human-robot interaction. The testbed will initially involve a bench-top robot arm, sensors to track human motion, natural language processing, and software to integrate these systems. Immediate tasks include: setting up a Robai Cyton robot arm; programming a Leap Motion controller to track human movement; programming natural language processing using Stanford’s NLP toolkit; and programming probabilistic reasoning based on human speech and gestures. Students with an interest in image processing, computer vision, artificial intelligence, mechatronics, robotics, probabilistic reasoning, cognitive science, big data, or natural language processing are encouraged to get involved.
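To give a flavor of the probabilistic-reasoning piece, here is a minimal sketch of fusing a speech cue and a gesture cue into an intent estimate with a naive Bayes update. The intent labels, cue classes, and all probability values below are illustrative assumptions, not the project's actual model.

```python
# Hypothetical intent-inference sketch: multiply a prior over intents by the
# likelihood of each observed cue, then normalize. All numbers are made up.

INTENTS = ["hand_me_part", "hold_object", "idle"]

# P(intent): uniform prior
prior = {i: 1.0 / len(INTENTS) for i in INTENTS}

# P(cue | intent) for one speech token and one gesture class (assumed values)
speech_lik = {  # observed word: "give"
    "hand_me_part": 0.7, "hold_object": 0.2, "idle": 0.1,
}
gesture_lik = {  # observed gesture: open palm extended toward the robot
    "hand_me_part": 0.6, "hold_object": 0.3, "idle": 0.1,
}

def posterior(prior, *likelihoods):
    """Multiply the prior by each cue likelihood, then normalize."""
    unnorm = dict(prior)
    for lik in likelihoods:
        for i in unnorm:
            unnorm[i] *= lik[i]
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

post = posterior(prior, speech_lik, gesture_lik)
best = max(post, key=post.get)
```

With both cues agreeing, the posterior concentrates on a single intent; a real system would learn these likelihoods from data rather than hand-code them.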

DANI Robot Demo 

Professor Carlotta Berry

This project involves using LabVIEW to create demonstration applications on two new DANI robots. The student would first become familiar with LabVIEW and the basics of mobile robotics, then design a demonstration application that uses the two mobile robot platforms. The demonstration application would be similar to an interactive game in which a user remotely commands the robot to achieve some task (similar to an Android app). Ideally this goal or mission would involve vision, light, and/or heat sensing while also avoiding obstacles or exhibiting some other basic behavior.

Compressed Sensing for Geolocation of Radio Sources

Professor Kurt Bryan (MA)
Professor Deborah Walter (ECE) 

Compressed sensing (CS) is a new computational methodology that allows one to extract far more information from certain types of data than was thought possible using classical techniques.  The topic has connections to mathematics (especially linear algebra), computer science, and signal or image processing.  CS has shown great promise in many applications, and is currently a red-hot research area. One application in which CS seems to work well is that of locating a radio frequency (RF) source using data collected from a fairly small number of radio receivers with very simple antennas.  In conjunction with the Air Force Research Lab, we have made some progress on this problem in the case in which one or perhaps a few RF sources on the ground must be located using receivers mounted on unmanned aerial vehicles (UAVs), but much work remains to be done.

This project would, at the least, involve using an existing Matlab codebase and GUI to run simulations in order to understand when a CS approach to this problem is likely to be successful, e.g., how many RF sources can we locate using five UAVs?  What if the RF sources are in motion, or transmit intermittently?  The project could also involve developing new algorithms and incorporating them into the code, or carrying out a more rigorous analysis of existing algorithms.
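The core idea the simulations exercise can be sketched compactly: candidate source positions form a grid, each receiver measurement is a linear combination of source contributions, and with only a few active sources the coefficient vector is sparse. The random sensing matrix and the orthogonal matching pursuit (OMP) recovery below are generic stand-ins for the project's actual measurement model and Matlab code.

```python
# Sparse-recovery sketch behind CS geolocation (illustrative, not the
# project's actual model): recover a 2-sparse source vector from a small
# number of linear measurements using orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(0)

n_grid = 100      # candidate source locations on a grid
n_meas = 30       # measurements (few receivers / snapshots)
k = 2             # number of active RF sources

A = rng.standard_normal((n_meas, n_grid))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns

true_support = [17, 63]                   # grid cells holding the sources
x_true = np.zeros(n_grid)
x_true[true_support] = [1.0, 0.8]
y = A @ x_true                            # noiseless measurements

# OMP: greedily pick the column most correlated with the residual,
# then least-squares re-fit on the chosen support.
support = []
residual = y.copy()
for _ in range(k):
    idx = int(np.argmax(np.abs(A.T @ residual)))
    support.append(idx)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

recovered = sorted(support)
```

With 30 measurements against 100 candidate cells, far fewer than classical sampling would suggest, the two active cells are recovered exactly in the noiseless case; the interesting simulation questions (motion, intermittent transmission, noise) correspond to perturbing `A` and `y`.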

If you accept this challenge, you will be working with an interdisciplinary team of researchers that may include professors, students, and professional engineers from Rose-Hulman, the Air Force Institute of Technology, and the Air Force Research Lab.  We anticipate that there may be future opportunities to take part in the development of a physical test-bed, conducting experiments, and applying algorithms to experimental data. This may result in an opportunity to focus your work into a senior math thesis, an engineering master's thesis, and/or a conference paper.

Simultaneous Localization and Mapping (SLAM)

Professor Deborah Walter

Today’s interconnected wireless world has led to a potential spectrum-scarcity problem. The Air Force requires real-time radio-frequency (RF) situational awareness to support spectrum management protocols and enable cognitive radio networks. Sensing techniques, such as energy detection and cyclostationary feature detection for identifying primary users and spectrum holes, are fundamental elements of a cognitive radio network. In more sophisticated cognitive radio networks, determining the location of RF signals provides information that could be exploited to mitigate interference or improve the overall sensing process. Algorithms have been proposed that use a cooperative sensor network to geolocate RF signals. The problem is particularly challenging when GPS is not available for the localization, timing, and coordination of the sensor nodes.

The goal of this project is to implement the SLAM technique for simultaneous localization and mapping in a Matlab environment. This project will be conducted in conjunction with related projects to develop and test novel algorithms for a distributed, cooperative sensor system. The algorithm can be tested using a prototype network of sensors currently being developed. Further investigation should focus on how SLAM can be combined with compressed sensing for geolocation in a GPS-denied environment.
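The essence of SLAM, jointly estimating the sensor's own position and the map while measuring one relative to the other, can be shown in one dimension, where the filter is linear. The toy motion model, measurement model, and noise values below are assumptions for illustration only, not the project's sensor model.

```python
# Minimal 1-D SLAM sketch as a Kalman filter: the state jointly holds the
# robot position x and one landmark position m; odometry is noisy, and each
# measurement is the noisy signed distance z = m - x. All values are assumed.
import numpy as np

rng = np.random.default_rng(1)

true_m = 10.0                 # true landmark position (unknown to the filter)
q, r = 0.01, 0.1              # odometry and measurement noise variances

# State [x, m]; landmark starts with huge uncertainty.
mu = np.array([0.0, 0.0])
P = np.diag([1e-6, 1e6])

H = np.array([[-1.0, 1.0]])   # measurement model: z = m - x

true_x = 0.0
for _ in range(50):
    u = 0.2                                        # commanded step
    true_x += u + rng.normal(0, np.sqrt(q))        # true motion with noise
    mu[0] += u                                     # predict
    P[0, 0] += q
    z = (true_m - true_x) + rng.normal(0, np.sqrt(r))  # relative measurement
    S = H @ P @ H.T + r                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain (2x1)
    mu = mu + (K * (z - H @ mu)).ravel()           # joint state update
    P = P - K @ H @ P                              # covariance update

landmark_est = mu[1]
```

The landmark's initial uncertainty collapses after the first few measurements even though the robot never observes its own position directly; that coupling between pose and map estimates is exactly what full SLAM generalizes to many landmarks and higher dimensions.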

Design of an ultrasound-based remote controlled car for sophomore competition

Professor Mario Simoni

I would like to design a small toy car whose velocity can be controlled remotely using an ultrasound transceiver. The whole system should be able to be built from standard analog and digital components. The idea is to hand a group of sophomore ECE students a "kit" of parts and allow them to design and build the whole system in one weekend. The final goal is to design the system and specify what a reasonable "kit" of parts should be.

Centrally controlled power to Career Services Monitors

Professor Mario Simoni

The career services department operates a number of monitors around campus to display important information. It would be nice to shut down these monitors at the end of each day to conserve electricity without having to travel around campus and manually turn off each one. The goal of this project would be to devise a method for centrally controlling the power to each of these monitors.

Design of a neural simulation processor

Professor Mario Simoni

I would like to design a highly specialized CPU that can simulate the complex biochemical processes that occur in neurons, and then compare the performance of this CPU to a more general-purpose CPU. Initial work has already been done on coding some of the building blocks in VHDL. I would like a student to continue with the design and run the system on an FPGA. I would also like to run a general-purpose CPU on the same FPGA and compare the results of the two systems.
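The kind of computation such a processor would accelerate can be sketched with a deliberately simple stand-in: forward-Euler integration of a leaky integrate-and-fire neuron. Realistic biochemical models (e.g., Hodgkin-Huxley) integrate several coupled differential equations per neuron in this same step-by-step pattern; all constants here are illustrative.

```python
# Illustrative neuron-simulation kernel: Euler integration of a leaky
# integrate-and-fire neuron under constant input current. A specialized CPU
# would run many such update loops in parallel; all parameters are assumed.

def simulate_lif(i_in=1.5, dt=0.1, steps=1000,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the number of spikes fired under constant input current i_in."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # dv/dt = (-(v - v_rest) + i_in) / tau, integrated by Euler's method
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset        # fire and reset
    return spikes

n_spikes = simulate_lif()
```

The inner loop is small, regular, and embarrassingly parallel across neurons, which is what makes a specialized datapath attractive compared with a general-purpose CPU.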

Software interfaces for Electronic Music Synthesis

Professor Ed Doering

Background and Motivation: A digital audio workstation (DAW) constitutes the backbone of a contemporary music recording and production studio. Implemented as a software application, a DAW achieves enormous flexibility through the use of "plug-ins" embodied as dynamic link libraries (DLLs). Plug-ins implement audio signal processors and virtual synthesizer instruments. Students taking Electronic Music Synthesis (ECE481) create sounds and process signals with LabVIEW, a language they have mastered in a prerequisite DSP course. The term project at the end of the course would be greatly enhanced if students could use LabVIEW to transform their algorithms into a plug-in compatible with the Virtual Studio Technology (VST) standard recognized by all DAW applications.

Problem Statement: Create the software infrastructure necessary to transform an arbitrary audio signal generator or processor written in LabVIEW into a VST-compatible plug-in.

Approach: A VST software development kit (SDK) is freely available for the C language, and LabVIEW provides a mechanism to interface with C code. This project would involve mastering the VST SDK, learning how to interface with a LabVIEW virtual instrument (VI), producing a DLL file that conforms to the VST standard, and testing the DLLs to ensure compatibility with the DAWs used by students in ECE481.
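Whatever the synthesis or effect algorithm inside, a VST-style plug-in ultimately exposes a block-based processing callback: the host hands it a buffer of samples plus parameter values, and it returns a processed buffer. Here is a minimal sketch of that shape using a simple gain effect; the class and method names are illustrative, not the VST SDK's actual API.

```python
# Hypothetical plug-in skeleton showing the host-callback shape a LabVIEW
# algorithm would need to be wrapped into. Names and structure are assumed
# for illustration; the real VST SDK defines its own C interfaces.

class GainPlugin:
    def __init__(self, gain_db=-6.0):
        self.set_gain_db(gain_db)

    def set_gain_db(self, gain_db):
        # Hosts typically automate parameters in dB; convert to linear once.
        self.gain = 10.0 ** (gain_db / 20.0)

    def process(self, block):
        """Called by the host once per audio block of samples in [-1, 1]."""
        return [self.gain * x for x in block]

plugin = GainPlugin(gain_db=-6.0)
out = plugin.process([1.0, -0.5, 0.25])
```

The project's work is essentially building the bridge so that a LabVIEW VI can play the role of `process` here, with the DLL handling the host's calling conventions.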

Real-time adaptive microphone array

Professor Wayne Padgett

Thanks to a grant from Rockwell Collins, I currently have all the equipment needed to build a 16-channel microphone array using National Instruments hardware.  The system should be configurable to implement beamforming algorithms in real time on the CompactRIO FPGA.  I would like to work with a student or two to build and program the system for several possible array geometries and evaluate its performance.
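The simplest algorithm such an array could run is narrowband delay-and-sum beamforming: phase-align the 16 channels for a chosen look direction and average them, so sound from that direction adds coherently while sound from elsewhere partially cancels. The array geometry, spacing, and tone frequency below are illustrative assumptions, not the actual hardware configuration.

```python
# Narrowband delay-and-sum beamformer sketch for a uniform linear array.
# All geometry and signal parameters are assumed for illustration.
import numpy as np

c, fs, f0 = 343.0, 48000.0, 2000.0     # sound speed (m/s), sample rate, tone
n_mics, spacing = 16, 0.04             # 16 elements, 4 cm apart
positions = np.arange(n_mics) * spacing

def steering_vector(angle_deg):
    """Per-element phase delays for a plane wave arriving angle_deg off broadside."""
    tau = positions * np.sin(np.radians(angle_deg)) / c
    return np.exp(-2j * np.pi * f0 * tau)

def beam_power(source_deg, steer_deg):
    x = steering_vector(source_deg)            # array snapshot of the source
    w = steering_vector(steer_deg) / n_mics    # delay-and-sum weights
    return abs(np.vdot(w, x)) ** 2             # output power of the beam

on_target = beam_power(20.0, 20.0)     # beam steered at the source
off_target = beam_power(20.0, -40.0)   # beam steered well away from it
```

Steered at the source the normalized output power is 1, while steering 60 degrees away drops it below the array's sidelobe level; adaptive beamformers improve on this by shaping the weights against measured interference instead of using fixed phases.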

Wireless Sensor Networks

Professor JianJian Song

Wireless sensor networks are used to collect data from sensors located at each node and to relay that information back to a central point for analysis.  A node in a sensor network is a device that has the ability to take the desired measurements and relay that information for collection.  The network portion of a sensor network is not the communications channel used by the nodes for relaying information, but rather the data collection entity created by a collection of sensor nodes.

An example of a simple and common sensor network is a commercial fire alarm system.  Smoke detectors are located throughout a building; when one or more of these sensors detects smoke, a central control panel takes actions such as notifying emergency personnel or triggering alarms.  In this type of sensor network, each node is hard-wired to a central point and no intelligence resides in the individual sensors.  This is acceptable in a static environment such as a building, but is unacceptable for an environment where the sensor nodes must be deployed rapidly and be tolerant of changes in the environment.  One solution is a distributed sensor network.

A distributed sensor network has several requirements.  It must be deployable without planning an exact role for each node, capable of handling hundreds of individual nodes spread over a large area, and therefore tolerant of the addition and removal of nodes.  The network must also create its own communication infrastructure after deployment so that it can forward data from each node to a central collection point, and it must continue to function for days, weeks, or even months without outside intervention in a hostile environment.

These requirements imply several capabilities for each sensor node.  Each node must communicate wirelessly and be able to store data and forward it at the appropriate time.  In addition, it will be battery powered and must function on a limited power supply for the length of time required by the application.  Each node must also be interchangeable with any other node (i.e., all nodes are the same) and inexpensive enough to be deployed into a hostile environment without expectation of retrieval.
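The self-organizing store-and-forward behavior described above can be sketched in a few lines: after deployment, each node learns its hop distance to the collection point (here via a breadth-first flood over radio links), then forwards readings to any neighbor one hop closer. The topology and node names below are made up for illustration.

```python
# Gradient-routing sketch for a distributed sensor network: build a hop-count
# gradient from the sink, then forward greedily downhill. Topology is assumed.
from collections import deque

# node -> set of neighbors within radio range (hypothetical deployment)
links = {
    "sink": {"a", "b"},
    "a": {"sink", "c"},
    "b": {"sink", "c", "d"},
    "c": {"a", "b", "e"},
    "d": {"b"},
    "e": {"c"},
}

def hop_counts(links, sink="sink"):
    """Breadth-first flood from the sink establishes each node's hop distance."""
    hops = {sink: 0}
    q = deque([sink])
    while q:
        n = q.popleft()
        for m in links[n]:
            if m not in hops:
                hops[m] = hops[n] + 1
                q.append(m)
    return hops

def route(links, hops, src, sink="sink"):
    """Forward a reading downhill on the hop gradient; returns the path taken."""
    path = [src]
    while path[-1] != sink:
        # Tie-break by name so the choice is deterministic in this sketch.
        nxt = min(links[path[-1]], key=lambda m: (hops[m], m))
        path.append(nxt)
    return path

hops = hop_counts(links)
path = route(links, hops, "e")
```

Because the gradient is rebuilt by re-flooding, nodes can be added or removed and routes adapt, which is the tolerance property the requirements above call for; real protocols add duty-cycling and retransmission to meet the power and reliability constraints.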

Software challenges in autonomous vehicle navigation

Professor David Mutchler (CSSE)
Professor JP Mellor (CSSE)
Professor Carlotta Berry (ECE)
Professor David Fisher (ME)

In this project, students will design and develop software to solve challenges in navigation faced by autonomous vehicles (robots).  The problems are real problems faced by real robots in a real competition.


The software to be developed addresses challenges faced by a particular pair of robots that will be entered in the 2012 Intelligent Ground Vehicle Competition (IGVC), although the software will be designed to apply to other robots as well.  In that competition, robots navigate a field the size of a football field, laced with obstacles, as suggested by the diagram above.  The robot arrives at the competition knowing the general nature of the course but not its specifics.


The particular robots of interest are:

  • Husky A200: a commercial robot whose chassis (wheels, motors, body, power, etc.) is complete.  Students in this project will choose and attach sensors (with help from the Rose-Hulman Robotics Team) and write software for it.
  • Moxom's Master: the Robotics Team's current robot (the team is beginning the design of a second-generation robot).

Students in this project will receive credit for CSSE 290, Software Challenges in Autonomous Vehicle Navigation.  They will attend the IGVC 2012 competition where their software will be used in the above robots.  Results of their research will be reported at that competition, at one or more conferences in Robotics, and at one or more conferences in Engineering Education.
