
Dueling Banjos Using Atmega32




Introduction

Our project was to create two individual microcontrollers that can play banjo notes cooperatively to play two-part songs using nothing but sound to communicate and synchronize.


Humans have been able to synchronize musical instruments into a single, coordinated multi-part song for thousands of years. In our project we attempted to simulate the communication and timing needed to play musical instruments in a group, using two independent microcontrollers. To accomplish this task, three main goals had to be met. First, we needed a way for each microcontroller to detect when its partner emits sound. Second, a communication protocol had to be established so that the two units could agree on what song to play and when to start. Third, we needed a method for digitally synthesizing a banjo waveform and outputting an analog signal to speakers. With these core functionalities, we were able to construct a simple system that replicates the fundamental communication and synchronization needed to play music together.

High Level Design

Motivation

We were initially intrigued by a project idea posted on the course web page that involved several independent microcontrollers emitting random yet synchronized ultrasound bursts. We really liked the idea of coordinating independent nodes of a network in real time to accomplish a common goal, and we thought it would be interesting to apply that nodal synchronization to music. With some help from Morgan, our TA, and inspiration from the dueling banjos scene in ‘Deliverance’, we ultimately came up with the idea of acoustically synchronizing two microcontrollers to play (and duel with) banjo songs.

Background Math

Although this project does not require much complicated math, one major mathematical method underlies our implementation: the Karplus-Strong Algorithm (KSA), which digitally generates a clean, realistic plucked-string sound. To generate a string pluck, our implementation of the KSA uses a fixed-length shift register to represent the string. Values are fed into the register, and as they shift through it they simulate how a wave traverses a string in the real world. The value at the end of the shift register is sampled, passed through a digital low-pass filter, and reinserted at the beginning of the register. This feedback allows the values in the shift register to resonate, while the attenuation introduced by the filter models the physical losses in a real string.

[Figure: Karplus-Strong algorithm block diagram]
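The delay-line-with-feedback idea described above can be sketched in a few lines. This is a minimal Python model, not our Mega32 firmware: the buffer length, decay factor, and noise-burst initialization are illustrative choices, and the filter is the classic two-tap average.

```python
import random

def karplus_strong(n, num_samples, decay=0.996):
    """Simulate a plucked string with an n-sample delay line.

    The line is seeded with noise (the 'pluck').  Each step, the
    oldest sample is output, averaged with its neighbor (a simple
    low-pass filter), scaled by a decay factor, and fed back into
    the line, so the tone rings at period n and slowly dies away.
    """
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(num_samples):
        s = line[i % n]                  # oldest sample in the line
        nxt = line[(i + 1) % n]          # its neighbor
        out.append(s)
        # two-tap average = low-pass filter; decay models string losses
        line[i % n] = decay * 0.5 * (s + nxt)
    return out
```

Because each value passes through the filter once per period, the signal's energy falls steadily over time, which is what gives the synthesized pluck its natural decay.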

To vary the frequency of the notes generated by the KSA, only the size of the shift register needs to be changed. Since the values move through the shift register at the same speed as the sampling frequency, the resultant period of the signal is T = N*Ts, where N is the size of the shift register and Ts is the sampling period. This results in the equation F = Fs/N, where F is the frequency of the note being played, Fs is the sampling frequency and N is again the size of the array.
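The F = Fs/N relation means pitch is set simply by choosing a delay-line length. A quick illustration, assuming a hypothetical 8 kHz sampling rate (the report does not state the actual Fs); note that because N must be an integer, the achievable pitches are quantized:

```python
def delay_length(f_note, f_sample=8000):
    """Delay-line length for a target pitch: F = Fs/N  =>  N = Fs/F."""
    return round(f_sample / f_note)

def actual_pitch(n, f_sample=8000):
    """Pitch actually produced by an integer-length line: F = Fs/N."""
    return f_sample / n
```

For example, at Fs = 8 kHz a 440 Hz note needs N = round(8000/440) = 18 samples, which actually sounds at 8000/18 ≈ 444.4 Hz, about 17 cents sharp.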

Structure

[Figure: Dueling Banjos Using Atmega32 schematic]

Our final project relies on the interaction of four major components. Each microcontroller samples the analog signal of a microphone circuit to gather information about sounds in the environment. This raw data is passed to the decode stage of our design, where the microphone data is filtered and interpreted. Depending on the mode, this stage either converts sequences of sinusoidal sound into bits or simply detects the pluck of a string. Once the input has been interpreted, it is passed along to the digital sound synthesis stage. Depending on the mode, this stage creates an 8-bit representation of either a sinusoidal signal or a banjo waveform generated by applying the Karplus-Strong Algorithm. This synthesized 8-bit representation is then passed to the sound generation stage, which consists of a simple resistor-based digital-to-analog converter that turns the digital output into a single analog voltage applied to the input of a speaker. By following this structure, we are able to manipulate the sound output of each microcontroller based on what sounds the microphone registers.
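The last stage, the resistor-based DAC, maps each 8-bit sample to a voltage. This is a minimal idealized model of that mapping, not a circuit simulation; the 5 V reference is an assumption (a typical Mega32 supply), and real resistor ladders add tolerance error the model ignores.

```python
def dac_voltage(code, vref=5.0, bits=8):
    """Ideal output of an n-bit resistor DAC: Vout = (code / 2^bits) * Vref."""
    if not 0 <= code < (1 << bits):
        raise ValueError("code out of range for %d-bit DAC" % bits)
    return code / float(1 << bits) * vref
```

With 8 bits the output moves in steps of 5.0/256 ≈ 19.5 mV, which is fine resolution for driving a speaker through an op-amp buffer.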

Parts List

Part                          Quantity  Unit Cost  Total   Source
Atmel ATMega32                2         $8.00      $16.00  ECE 476 Lab
Custom Mega32 PCB             2         $5.00      $10.00  ECE 476 Lab
Power Supply                  2         $5.00      $10.00  ECE 476 Lab
Keypad                        2         $5.00      $10.00  ECE 476 Lab
Large Solder Board            2         $2.50      $5.00   ECE 476 Lab
Microphone                    2         $1.01      $2.02   Digi-Key
Speakers                      1         $5.00      $5.00   ECE 476 Lab
Female Headphone Jack         1         -          -       Owned
Op Amp (LM358)                4         $0.15      $0.60   ECE 476 Lab
40-pin DIP Socket             2         $0.50      $1.00   ECE 476 Lab
Header Machine Pins           92        $0.05      $4.60   ECE 476 Lab
Capacitors, Resistors, Wires  -         -          -       ECE 476 Lab
TOTAL                                              $64.22

