Making Refinements to the Old Project
I put together the Color Organ Triple Deluxe over a year ago. It was a bare-minimum version of a color organ circuit, using LEDs instead of the incandescent lamps that traditional color organs use.
The circuit worked pretty well, considering its simplicity. However, I kept feeling that this project deserved further refinement. So I went back to the drawing board (or breadboard) and took a hard look at the circuit…
The result? Please take a look at the video.
There were a few problems. The transistors in the circuit were biased in a way that was both supply-voltage dependent and device dependent – in other words, if the voltage was too high or too low, or the transistors had slightly different characteristics, the circuit did not perform well.
The filter performance was also a bit poor – the separation between the frequency bands was not so great.
First, I changed the initial gain stage from a single-transistor design to a two-transistor design. It’s a basic class-A common-emitter amplifier followed by an emitter follower. They are direct coupled for optimum performance as well as reduced part count (it’s always important to me to design circuits with the least number of parts). Adding the emitter follower stage provided the low output impedance needed for the filters to perform well. The biasing circuit was also revised to be less device and voltage dependent.
Second, the filters were refined to have better separation. The input and output impedances of the filters are also better matched, to achieve better efficiency.
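To get a feel for how band separation falls out of simple RC sections, here is a rough sketch of first-order corner frequencies. The resistor and capacitor values below are hypothetical, chosen only to illustrate the math; see the schematic and BOM for the real values.

```python
import math

def fc(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order RC filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical example values for three audio bands:
low  = fc(10e3, 0.22e-6)   # bass low-pass corner, ~72 Hz
mid  = fc(10e3, 0.022e-6)  # midrange corner, ~723 Hz
high = fc(10e3, 2.2e-9)    # treble high-pass corner, ~7.2 kHz
print(f"{low:.0f} Hz / {mid:.0f} Hz / {high:.0f} Hz")
```

A decade of capacitance between bands gives a decade of corner-frequency separation, which is one simple way to keep the bands from overlapping too much.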
Third, the LED driver circuits were given another transistor. In the original design, the output buffer and the LED driver were handled by the same transistors. Now the filter outputs are buffered by emitter followers, then the filtered audio waves are rectified before going into the LED drivers.
Those changes made a huge difference. I also tweaked the component values obsessively to get the best performance, and added a sensitivity adjustment control.
There are many more parts compared to the earlier version, but the result is totally worth it. The LEDs now respond to music very, very nicely.
Here are the circuit schematics, BOM, and PCB layout. The filter response graph is also shown. Keep in mind that the graph is more of a perceptual representation than an actual measurement.
The circuit is loosely based on the many vintage circuits before it, with a few improvements.
The input buffer/gain stage is designed to have low output impedance. This is important for the filter stages that follow. This stage is also designed to give high gain and maximum output signal level, since the filters are passive and will lose some signal.
(This amplifier stage took me the most time to design. I tried out many topologies and parameters, and I think I found the best balance between simplicity, stability, and performance. Unlike designing with op-amps, designing amplifiers with discrete transistors is an art of compromise.)
The use of emitter followers as rectifiers (Q3, Q5, and Q7) is my original idea. Combined with bias points (set by R8, R9, and so on) sitting just below the point where the LED drivers turn on, this makes the color organ very sensitive to low-volume audio input, while eliminating the diodes typically used here.
All resistors are 1/8W (or higher) carbon film type, 5% tolerance. Small capacitors are film type, and 0.22uF and above are electrolytic types with a voltage rating of 16V or higher.
This type of analog circuit tends to be picky about part values, so it’s best not to change resistor values, etc., unless you know what you are doing.
Resistor and capacitor types, on the other hand, are not very critical, so use whatever you have. Using ceramic capacitors instead of film, for example, is fine.
I used MPS2222A transistors, which can be substituted with a number of general-purpose transistors of similar specs. The ones I tested are the 2N4400, 2N4401, and 2N3904.
Q1 is more critical than the other transistors in this circuit. The biasing is adjusted for transistors with an hfe of around 200. If you use a different transistor, you might want to check the voltage at the Q1 collector – it should be between 4.5 and 6V with a 12V supply. Adjust R5 or try a different transistor for Q1 if it’s too high or too low.
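As a sanity check, the arithmetic behind a collector-voltage measurement like this can be sketched for a divider-biased common-emitter stage. The component values here are hypothetical stand-ins, not the actual circuit values; the point is just how the bias point is estimated.

```python
# Rough DC bias estimate for a divider-biased common-emitter stage.
# All component values are hypothetical, for illustration only.
VCC = 12.0               # supply voltage (V)
R1, R2 = 100e3, 22e3     # base voltage divider (ohms)
RC, RE = 4.7e3, 1.0e3    # collector and emitter resistors (ohms)
VBE = 0.65               # typical silicon base-emitter drop (V)

vb = VCC * R2 / (R1 + R2)   # base voltage (ignoring base current)
ve = vb - VBE               # emitter voltage
ic = ve / RE                # collector current ~ emitter current
vc = VCC - ic * RC          # collector voltage

print(f"Vc = {vc:.2f} V")   # lands in the 4.5-6V target window
```

With a high-hfe transistor the base current barely loads the divider, so this estimate holds; a low-hfe part pulls the base voltage down and shifts the collector voltage, which is why checking and adjusting R5 matters.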
PCB layout is provided as PDF for home brew PCB makers. It’s a single layer design, so it should be easy to make your own.
Kits and PCBs
Kits and PCBs of this project are available at my website.
There are 8 transistors, and many resistors, capacitors, and LEDs, but the assembly is very straightforward as they are all familiar through-hole parts (and no ICs). In a way, Color Organ Triple Deluxe II is built like the circuits from the 70’s. If you are like me, you will appreciate the modern-vintage feel of the all-discrete-component design.
I recommend soldering the lower-profile parts first, then moving on to taller and taller parts. I arranged the BOM below in the order of soldering:
Notes on Solder Resin/Flux
Some solder resin/flux is electrically conductive. (Resin or flux is inside the solder wire to help the solder adhere to the joints.) Some parts of Color Organ Triple Deluxe II are very sensitive to even a tiny amount of electrical leakage caused by soldering resin/flux. If the LEDs on Color Organ Triple Deluxe II stay lit without any sound signal coming in, you need to clean the PCB to remove the resin/flux.
“No Clean” type flux causes no problems (as the name implies), but more typical resin-type flux can cause a good amount of leakage, and cleaning might be required.
You can use an acid brush or an old toothbrush immersed in rubbing alcohol to scrub the back of the PCB. Rinse out the brush, wet with alcohol again and scrub another round or two until all the resin residue is gone. Make sure to dry the PCB completely before connecting to the power supply.
Color Organ Triple Deluxe II is designed to run from a 12V DC power supply. The circuit works reasonably well on 9V power, though. However, a 9V battery is not recommended as a power source because of the relatively high current draw (about 25mA at idle).
It’s best to connect a regulated 12V DC power supply. Be careful if you want to use a typical wall wart – they can output much higher voltage than they are rated for – sometimes as high as 18V from a 12V one. Color Organ Triple Deluxe II can operate safely from up to about 15V. (If you want to use a non-regulated AC adaptor, try a 9V-rated one – they typically produce around 13V.)
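The 9V-battery caveat is easy to quantify. Assuming a typical alkaline 9V capacity (a figure not from the original text), the idle draw alone drains it quickly:

```python
battery_mAh = 550.0   # typical 9V alkaline capacity (assumed)
idle_mA = 25.0        # idle current draw from the text
hours = battery_mAh / idle_mA
print(f"~{hours:.0f} hours of idle runtime")
```

Less than a day of idle runtime, before the LEDs draw anything at all, is why a wall supply makes more sense here.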
Audio source can be any “line level” output from audio equipment, or headphone output from computer sound cards and iPod/MP3 players. If you want to listen to the music while using Color Organ Triple Deluxe II, you might need a splitter cable.
Connect Color Organ Triple Deluxe II to your audio source of choice, and give it a play. I found that music with a good amount of beats gives the best results. Adjust the potentiometer (sensitivity level) according to the sound level.
The LEDs react to the sound volume in such a linear manner that it feels like the Color Organ is translating sound into light.
The light from the LEDs is blindingly bright. You can use Color Organ Triple Deluxe II as a wall wash – project the light towards walls or the ceiling and dim the lights in the room.
You will discover a new joy of listening to music.
The Solder : Time™ watch kit.
The Solder : Time is an original-design watch kit that you solder and assemble yourself. It is delivered as a through-hole kit: you solder the components to the PCB, then enclose it all in four layers of laser-cut acrylic.
Introduced at Maker Faire Bay Area in mid-2011, it was an instant hit. The supplied battery lasts a really long time, and the onboard Dallas RTC keeps impeccable time. This is the first SpikenzieLabs kit that is a ‘wearable’.
Solder : Time is not only a wristwatch. Set it up as a desk clock, clip it to your clothes like a badge, or thread a chain through it and you’ve got a pocket watch.
More advanced tinkerers will notice the I2C lines broken out on the bottom side, for hacking and integrating the RTC into other projects. There are also pads on the back of the PCB for a DC supply, as well as an ‘always-on’ function.
Start by bending the leads of all three resistors to look like those in the photo.
Solder them in place and trim the leads. They can go in either direction.
The crystal for the real time clock is the very small silver part with two leads.
Solder this as in the photo.
IC and Capacitors:
1. The notches on the two IC chips (RTC and PIC) must match up with the notches printed on the white silk screen.
2. These two ICs are soldered directly onto the PCB. For best results, only apply heat from your soldering iron for 1 to 3 seconds per leg.
Solder the two orange-yellow capacitors in place.
Trim all leads.
Place the 4-digit, 7-segment LED display onto the PCB. Push it flat, making sure all the legs go through the holes.
NOTE: Make sure that the LED is installed right-side up. Check that the decimal points are on the bottom, as in this photo.
Solder and trim the leads.
You may want to peel the thin clear protective layer from the LED. Use your fingernail to lift an edge and then gently peel it off.
Push the button into the PCB. It should almost click into place.
Solder and trim the leads.
Assemble the watch body:
The battery holder is very light and likes to move around as you solder it. Solder it in place with the open edge facing down.
Taping the battery holder in place with masking tape while soldering works best.
Another way to solder the battery holder without tape is to heat the pad and battery holder by placing the tip of your soldering iron on the PCB’s battery pad and just barely touching the battery holder. After a second or two add some solder. Once one pad is soldered, do the other one, and come back to add more solder to the first one if required.
I’ve tried soldering a blob of solder onto one of the pads, and then heating the battery holder … this didn’t work.
Building the circuit board: This is an easy-to-solder kit.
Peel the protective layer off the plastic parts:
In order to protect the plastic parts during manufacturing, they come covered with a thin plastic/tape layer. These should be removed before you assemble the watch.
1. Only use something soft, like your fingernail, to scrape the edge of the protective layer and then peel it off.
2. VERY IMPORTANT: Some of the layers have very thin parts that can crack easily. The best way we have found to peel the protective layer off these parts is to hold the part down evenly on a flat surface with one hand and peel with the other hand. Holding the part in the air while you peel may snap the part. (Don’t worry: after the watch is assembled and screwed together, it is very strong.)
Stacking the watch:
With the battery in the battery holder, start making a stack of parts.
Close up of button details:
Slide the battery into the battery holder on the Solder : Time PCB with the CR2032 “+” label text facing up. When the battery goes into the holder, the watch should turn on and display 12:00. If not, remove the battery and check your work. After about five seconds the display will go out; this is normal.
NOTE: As tempting as it may be, don’t touch the battery holder while soldering – you will burn your finger!
1. Start with the plastic watch back oriented with the two larger holes at the top and bottom, and place the PCB over it.
2. Next, place the plastic PCB layer part around the PCB, with the open end facing down.
3. Place the plastic switch layer on top of the PCB layer plastic. The opening for the switch is on the right side.
4. Place the switch lever into the space on the right side of the switch layer. Make sure that the switch touches the Solder : Time button and does not bind. Test it: the Solder : Time should turn on when pressed and off a few seconds after it is released.
5. Place the front face of the Solder : Time on the top of the stack.
6. Using your fingers, screw the layers together with the included screws. Don’t tighten them fully until all four screws are installed (this will help you align the layers).
7. Almost done: slide the wrist band in from the bottom edge, under the bottom and up through the other hole in the bottom, and out the top edge.
How does it work?:
When we came up with the idea for the Solder : Time, one of our big concerns was battery life. We knew that we would need an RTC and some type of microcontroller. After experimenting with a bunch of different RTCs, we decided to go with the Dallas Semiconductor DS1337+. This RTC runs over a range of voltages, including our required 3V. It draws extremely little current in standby mode and uses I2C to communicate with the master microcontroller.
For the microcontroller we chose the PIC16F631. This IC has only a few peripheral functions built in (which keeps the cost down), and since we didn’t need many, this also saves some power. The PIC16F631 has a very low-power sleep mode, has enough pins for our project (with only one spare), and also runs at 3 volts. The PIC16F631 does not have built-in I2C, so we used a bit-banged version to control the RTC.
In order to maximize battery life, we also used a couple of tricks. For the I2C bus we used higher-value pull-up resistors (10k vs. 4.7k) to reduce current, and we set the one unused pin on the PIC to be a logic-low output (Microchip’s data sheets recommend this as a power-saving setup). Testing the watch’s current draw, we estimated that, given the average capacity of a CR2032 battery, the watch should keep time in standby mode for about five years before needing a battery swap. Overall battery life will depend on how often the display is turned on.
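That five-year figure implies a very small average standby current. Using a typical CR2032 capacity (assumed here, not stated in the text):

```python
capacity_mAh = 225.0              # typical CR2032 capacity (assumed)
hours = 5 * 365 * 24              # five years in hours
standby_uA = capacity_mAh / hours * 1000.0
print(f"implied average standby draw: ~{standby_uA:.1f} uA")
```

An average draw in the low single-digit microamps is consistent with an RTC in standby plus a sleeping PIC, which is exactly the design goal described above.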
Seven Segment Display:
The LEDs: One of the objectives when designing the Solder : Time was to make sure that all of the segments of the digits had equal brightness. In some cases only two segments are lit, as for the number “1”; when an “8” is displayed, all seven segments are lit. Battery power was also a concern: the battery we chose for this project is the CR2032, a 3V low-current battery. Our solution was to light only a single segment at a time. This way, whether the digit being displayed is a “1” or an “8”, all the segments are equally bright, and we don’t overtax the battery by drawing too much current. (If you wave the watch in the dark, you may be able to see the flashing pattern.)
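Lighting one segment at a time also caps the instantaneous current at a single segment’s drive current, at the cost of duty cycle. With 4 digits of 7 segments plus the colon (slot count assumed here), each segment is on only a few percent of the time:

```python
slots = 4 * 7 + 1    # 4 digits x 7 segments + colon = 29 time slots
duty = 1.0 / slots   # each segment lit at most ~3.4% of the time
print(f"{slots} slots, per-segment duty cycle {duty:.1%}")
```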
Use a timer:
To light only one segment at a time, we used one of the internal timer peripherals in the PIC. A timer may also be used as a counter, but in this case we used it as a timer, to time the on-time of each segment. When the timer runs out, the PIC program jumps into an interrupt routine. To display a number, the PIC looks up the segments in a table that stores the on and off values for the segments. It starts with the first segment of whichever number it is displaying and turns that segment on (or not, if that segment isn’t part of the digit). After this, the program returns to the normal main loop and waits for the timer to run out again. When it does, the PIC turns on the next segment of the current digit. Even if the next segment of the current digit is not turned on, the timer still waits out its slot.
After all of the segments of a digit are displayed, the next digit is displayed, and after the last digit the colon is displayed; then it starts over at the first digit.
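The scan order described above can be sketched in Python (the real firmware is PIC code; the segment patterns here are illustrative):

```python
# Simulate the interrupt-driven scan: every timer tick advances one
# segment slot, digit by digit, ending with the colon before wrapping.
SEG_TABLE = {          # 7-bit segment patterns, bit 0 = segment 'a'
    "1": 0b0000110,    # only two segments lit
    "8": 0b1111111,    # all seven segments lit
}

def scan_slots(digits):
    """Yield (digit_index, segment_index, lit) in timer-tick order."""
    for d, ch in enumerate(digits):
        pattern = SEG_TABLE[ch]
        for s in range(7):              # one tick per segment slot
            yield d, s, bool((pattern >> s) & 1)
    yield len(digits), 0, True          # the colon gets its own slot

ticks = list(scan_slots("18"))
lit = sum(1 for _, _, on in ticks if on)
print(f"{len(ticks)} slots, {lit} lit")
```

Note that unlit slots still consume a full timer period; that fixed per-slot timing is what keeps every segment’s brightness equal regardless of the digit.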
There is a special case when displaying the colon: it flashes when in time-setting mode. It took a few tries to get the flashing to look good and steady in the three states of the time-setting mode: slow forward, fast forward, and idle. To achieve a steady flash rate we used another timer. This timer simply toggles the colon on or off every time it times out; then, when it is time to display the colon, it is simply lit or not.
For the Solder : Time’s sleep mode, we use a manual counter variable. This variable is incremented every time the program goes through a loop, and when it overflows (gets too big for the size of the variable’s storage), a sleep flag is set and the watch goes to sleep. If the button is pressed before the watch goes into sleep mode, the sleep counter is reset to zero and counting starts again. This way, the watch will go to sleep consistently about 5 seconds after the last time you pressed the button.
Before the watch goes to sleep it turns off the LEDs. The watch button is set to wake the watch up when pressed. Nothing needs to be done to the RTC, since it goes into standby mode when no data is being transferred and simply keeps the time.
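The overflow-based sleep timer can be sketched the same way. The counter size below is illustrative, not the actual firmware constant:

```python
COUNTER_MAX = 256   # assumed 8-bit style counter; overflow sets the sleep flag

def loops_until_sleep(button_presses, max_loops=100000):
    """Return the main-loop index at which the sleep flag is set."""
    counter = 0
    for i in range(max_loops):
        if i in button_presses:
            counter = 0              # a button press restarts the countdown
        counter += 1
        if counter >= COUNTER_MAX:   # "overflow": time to sleep
            return i
    return None

print(loops_until_sleep(set()))      # sleeps after 256 quiet loops
print(loops_until_sleep({200}))      # a press at loop 200 delays sleep
```

Because the reset happens on every press, the watch always sleeps a fixed interval after the last button press, just as described above.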
If you’re like me and you’ve decided to take the plunge from EAGLE PCB to KiCad it can be really jarring. EAGLE had many quirks and rough edges that I’m sure I cursed when I first learned it back in 2005. Since then EAGLE has become a second language to me and I’ve forgotten all the hard bits. So as you migrate to KiCad remember to take breaks and breathe (and say ‘Key-CAD’ in your head). You’ll be dreaming in KiCad in no time!
This tutorial will walk you through a KiCad example project from schematic capture to PCB layout. We’ll also touch on library linking, editing, and creation. We’ll also export our PCB to gerbers so the board can be fabricated.
While this tutorial is aimed at beginners, I am going to use terms such as ‘schematic components’ and ‘polygon pours’. If something doesn’t make sense, that’s OK; just take a moment to do a quick search. If you really get stuck, please use the comments section on the right. We always want to improve our tutorials to make them easier.
Let’s get started! Head over to KiCad’s download page and download the latest version of the software for your specific platform:
Once installed, run KiCad. A main navigation window will display, from which you can open all the peripheral programs like schematic capture and PCB layout.
The KiCad project window looks quite empty and sad. Let’s open an example!
The ZOPT2201 UV sensor, originally designed at SparkX, is a great I2C UV Index sensor and will serve as our starting example for this tutorial. Download the ZOPT220x UV Sensor Breakout designs for KiCad and unzip the four files into a local directory:
Once the four files are located in a local directory (try looking in your downloads folder for …\ZOPT220x_UV_Sensor_Breakout-Tutorial), click File -> Open Project and open the ZOPT220x UV Sensor Breakout.pro file.
What are all these files?
These four files are all you need to share a KiCad design with a fellow collaborator. You may also need to share a footprint file, which will be explained more later on in this tutorial.
You may have had your first critical-judgment-eye-squint. Why is there a file to define which footprints go with which schematic components? This is fundamental to KiCad and is very different from how EAGLE works. It’s not a bad thing, just different.
Double-click the schematic file to open it in KiCad’s Eeschema schematic editor. You’ll probably get an error:
Ignore this for now. Click ‘Close’.
The schematic will load with lots of components marked with question marks (i.e., ??). KiCad is missing the link to the devices within this schematic. Let’s get them linked!
From within EeSchema, click on Preferences -> Component Libraries. This will open a new window. In the image below you can see that the project file contains information about where it should look for “Component library files”. Each project has its own connections to different file structures. We need to tell this project where to find the symbols for this schematic.
We will need the SparkFun_SchematicComponents.lib file. Download and store it in a local directory:
In the Component Libraries window, click the top ‘Add’ button. We’ll show you how to create your own schematic symbols in a bit.
Navigate to the directory where you stored the SparkFun_SchematicComponents.lib file and click ‘Open’. This file contains all the schematic components.
Once you’ve added the SparkFun schematic components library file, you should see it added to the list.
The astute will note the slightly different directory structure in the window:
That’s the difference between my home PC and my work PC. To avoid future errors when opening this schematic, let’s remove the entry from the active library files. Highlight the C:\Users\Nathan… entry from the list and click on the ‘Remove’ button.
Click on ‘OK’ to close out the Component Libraries manager. Now close and re-open the schematic to refresh.
Congrats! No more ?? boxes. For more information about using schematic component libraries across multiple computers, check the next subsection about the “user defined search path.” Otherwise, let’s start editing the schematic!
The schematic component libraries are assigned using KiCad’s Component Library Manager. If you’re like me and have schematic libraries shared across multiple computers, adding a “User defined search path” is helpful:
In the image, I have “..\..\SparkFun-KiCad-Libraries” defined. This is the local relative path to a Dropbox folder. These component library paths are specific to this project and its *.pro file. When I open this project on my laptop, it will first look for the files in the “C:\Users\nathan.seidle…” location. It will fail, then search the relative path of “..\SparkFun-KiCad-Libraries” and find the files. This allows me to share libraries between computers and between GitHub repos without having to reassign them every time I open the project on a different computer.
For now, you should continue with the tutorial. In the future, you may want to revisit this if you use KiCad across multiple computers.
If I get you to do nothing else, I will get you to learn the keyboard shortcuts! Yes, you can click on the equivalent buttons. However, the speed and efficiency of KiCad really shines when muscle memory kicks in so start memorizing. Here are the keyboard shortcuts in KiCad’s Eeschema that we will be using frequently in this tutorial:
This breakout board needs a larger 4.7uF decoupling cap (because I say so). Let’s add it!
Press ‘a’ to add a device to the schematic. This will open the component window. (If you are using a different tool you may need to click on the schematic as well):
There are hundreds of components (668 items, according to the title bar). Feel free to dig around, but to quickly find what we need, type ‘cap’ into the Filter field. Select the device labeled C_Small from the device library, then hit enter or click ‘OK’.
Place it on the schematic next to the 0.1uF cap.
After you place the capacitor, you’ll notice you’re still in placement mode. Hit the ‘Esc’ button on your keyboard to return to normal pointer mode. I find myself hitting escape twice a lot just to be sure I’m back in default state.
Once in the default state, put your mouse pointer on top of the 3.3V marker on the 0.1uF cap. Press ‘c’ to copy that device and place it above the new capacitor.
Do the same for the ground marker. Press ‘ctrl+s’ to save your work.
Now let’s wire them together. You guessed it, press ‘w’ but here’s the catch: have your mouse pointer over one of the bubbles before you press ‘w’.
Move your mouse to the other bubble and left click on the mouse to complete the wiring for GND. Remember if you mess up, press ‘Esc’ once or twice to return to default. Then move your mouse pointer to the bubble you want to connect and press ‘w’ and begin wiring 3.3V. The shortcut ‘w’ stands for wire.
Did something go wrong? Use ‘ctrl+z’ liberally to undo any mistakes.
Power and ground are now connected to our capacitor.
Let’s change the value from C_Small to 4.7uF. Hover the mouse pointer over C_Small and press ‘v’ (for value change). Change C_Small in the Text field by typing 4.7uF. Then hit enter or click ‘OK’.
Congrats! You’ve just wired up your first schematic component. Press ctrl+s to save your work.
But what about the C? designator?! Don’t worry about it! One of the many benefits of KiCad is the ability to auto-annotate a schematic.
Click on the Annotate schematic components button.
Use the default settings and simply click on Annotate button to confirm.
KiCad confirming annotation
KiCad will ask you if you’re sure, simply press return or click ‘OK’ again.
Capacitor with correct value and designator! We are all set. Time to edit the PCB.
Before we start editing the PCB, here are the keyboard shortcuts in KiCad’s Pcbnew that we will be using frequently in this tutorial:
We’ve got our schematic; now let’s get the new 4.7uF cap placed on the board. From the schematic, click on the ‘Generate netlist’ button.
You’ll see the following window:
KiCad is powerful, and with this power comes an overwhelming number of options. Lucky for us, we are just scratching the surface, so we don’t need to fiddle with any of these options. Simply press enter or click ‘Generate’ to confirm this screen. KiCad will ask you where you want to save the netlist as a *.net file, with the default location being the project folder. Again, press enter or click ‘Save’ to confirm.
Return to the main project window and double click the *.kicad_pcb file.
Welcome to PCB editing. Of all the differences between EAGLE and KiCad, it was the look of the PCB layout editor that threw me off the most. Under the View menu you will find three canvases: Default, OpenGL, and Cairo. I prefer OpenGL, so let’s switch the canvas to OpenGL for now.
Your mouse wheel does what you expect: Zoom In/Out and Pan by Clicking.
I don’t like the layer colors! Ya, me either. To change the layer colors, on the right side menu use your mouse wheel to click on the green square next to B.Cu (bottom copper layer). I prefer the following layer colors:
Pressing ‘+’ and ‘-’ will switch between top and bottom copper layers. This is useful when you need to view a certain layer.
It’s all cosmetic but these layer colors make it easier for me to see what’s going on.
Be sure to poke around the Render tab (next to the Layer tab), namely the Values and References check boxes.
I find the Values and References extremely distracting when turned on so I leave them OFF. Many designers live and die by these values, so use as needed.
Aren’t we here to add a 4.7uF cap to the board? Where is it? It’s nowhere, sorry.
What’s going on? We failed to assign a footprint to the capacitor we added in the schematic. Remember, KiCad does not link schematic components to footprints the same way EAGLE does. We have to specifically connect a footprint to each schematic component that was added.
Navigate back to the schematic and click on the ‘Run CvPcb’ button to associate components and footprints:
If this is the first time you’ve run CvPcb you’ll get this warning:
Simply click through it.
Depending on how many libraries you have installed, this may take up to 30 seconds. We will make this better later in the tutorial but for now, be patient.
In the left column are all the footprint libraries that KiCad ships with. In the middle is the list of components in your schematic. On the right are the footprints that may work with the highlighted component in the middle. Your job is to double-click the footprint on the right that goes with the component in the middle.
To make life easier click on the ‘View selected footprint’ button.
Now you can preview each footprint as you click down the list on the right.
In Windows, I press and hold the Windows button and press the left arrow and release. This will lock the CvPcb window on one side. Then select and lock the Footprint Preview Window to the right. This allows us to flip through footprints in the left window while seeing the preview on the right.
Highlight C2 in the middle column. Then double click the Capacitors_SMD:C_0603 in the right column. C2 should now be assigned a footprint.
Close the CvPcb window by clicking ‘Save and Exit’. We need to re-export the netlist. Remember how to do that? Click the ‘Generate netlist’ button again and press enter twice. Then open the PCB editor, either from the schematic or from the project window.
Hey! It’s still missing! We changed things, so we need to import the netlist! Remember how? Click on the ‘Read netlist’ button and you should see this window:
Click ‘Read Current Netlist’ and ‘YES’ to confirm. You can also hit enter twice. You should see the new capacitor near the board.
This is a decoupling cap so let’s put it next to the 0.1uF cap that is already there. Start by hovering over the new cap and press ’m’ for move.
Left click to place the capacitor. Now press ’m’ over the 0.1uF cap that is in the way and move it to the left.
Press ‘b’ to update the GND polygon pours.
We’ve got some traces to fix but this isn’t too bad. Hover over the bits of traces that you want to remove and press ‘Delete’. Let’s delete the trace and via that is under the capacitor’s +3v3 terminal. If your pointer is over multiple items (as shown in the image below with the cursor over both the trace and capacitor), KiCad will pop up a menu to clarify your selection. This is basically asking you to pick which one you want to operate on.
If you ever run into a problem press ‘Esc’ to return to default pointer mode. If you ever delete something wrong press ‘ctrl+z’.
Once you’ve removed most of the offending traces, you can begin routing by pressing ‘x’.
Single click on the pad that has the gray air-wire and drag it to the pad that it needs to connect to. Single click again to lock the wire in place. Press ‘b’ to update the polygons.
In the image below, KiCad is trying to route this trace in an odd way. If we place the trace here it will create an acute angle which is generally bad (read up on “acid traps”). We want the trace to be a T intersection. We need to change the grid.
Well that’s annoying!
Press ‘n’ to go to the next grid size. I needed to hit ‘n’ only once to get to the 2.5 mil grid and this nice intersection; you may need to go to a finer grid. You can also find this in the menu options under “Grid: 0.0635mm (2.5mils).”
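That menu label mixes units; the conversion between mils and millimeters is simple (1 mil = 0.001 inch = 0.0254 mm):

```python
def mil_to_mm(mil):
    """Convert mils (thousandths of an inch) to millimeters."""
    return mil * 0.0254

print(mil_to_mm(2.5))   # the "Grid: 0.0635mm (2.5mils)" setting
```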
Nice T intersection!
In the image below, I am routing the GND air wires. This is not really needed because the polygon pour connects the two pads but it does illustrate how good the ‘magnetic’ routing assistance is in KiCad. It’s very quick and easy to go from pad to pad.
We have two air-wires left. To get these we’ll need to place vias down to the bottom layer. Start by pressing ‘x’ and clicking on the start of the capacitor’s air wire for GND again.
Bring the trace out.
When you’ve reached open ground press ‘v’ to create a via. Single click to place the via and KiCad will automatically start routing on the bottom layer. Press ‘Esc’ to stop laying down traces; the polygon pour will take it from here. Pressing ‘Page Up’ will take you back to the top layer.
One air wire left!
To get this last air wire, you can try clicking on the GND pad of the 0.1uF cap but annoyingly KiCad won’t start routing?! Why?! It’s actually a good thing: the SDA trace is too close (overlapping actually) to the GND pad on the 0.1uF cap. By not letting you start routing KiCad is saying that trying to put a trace here would violate the DRC rules. What to do? Rip up the SCL and SDA lines to make some room.
Aha! Much better. Press ‘x’, click on the capacitor’s GND terminal, bring the trace out, and press ‘v’ to drop a via in this area. Hit escape to stop routing (let the polygon take care of it). Finally, press ‘Page Up’ to return to the top layer view.
Use the ‘Delete’ and ‘x’ buttons to re-route the SDA and SCL lines to finish up this board. Then press ‘b’ to update the polygons. The board should look similar to the image below.
Routed with no air wires!
Congrats! We have finished routing the footprints. Now let’s run the DRC to see if we’re legal.
Before we continue let’s go over the process for modifying or removing a component from a PCB layout. For example, let’s say that you wanted to remove an extra capacitor or resistor from a design. You would do all the regular steps:
The difference is a few import settings:
During the netlist import, the default settings are to ‘Keep’ exchanged footprints and to ‘Keep’ extra footprints.
Here, we need to change two things:
You may also want to ‘Delete’ unconnected tracks to clean up any left over tracks from the component you removed.
Click on the ladybug with the green check mark on it to open the Design Rule Check (DRC) window.
Let’s take a moment to talk trace width, trace spacing, and vias. In general, SparkFun designs boards with:
We go smaller than this on many designs but if you’re designing your first PCB, do not design it with 4mil traces and 8mil vias. You shouldn’t need to go that small on your first board.
Making PCBs is tricky, and with each increment of tolerance you remove, you increase the chances that the PCB (proto or not) will be fabricated with an error. And those errors can be hard to identify. We design with 10mil trace/space to reduce the probability that we’ll see PCBs with errors on the production floor. There’s nothing worse than troubleshooting a faulty product and asking yourself: “I’ve tried every rework and soldering trick in the book; is it the PCB that’s bad?”
That said, many PCB fab houses now charge low prices for 7mil trace/space and 12mil vias. If you’ve got a complex board with tight layout challenges, it’s better to use the smaller trace/space and vias. Save yourself the layout time and rely on the PCB fab house to correctly fabricate your board.
We generally use the KiCad defaults of:
Press enter again to run the DRC with the default settings.
Aw shucks! What’s wrong with my board? The vias marked with red arrows are too close to the nearby traces. The error will show up in the DRC window as: “Via near track.” Fix them by ripping up (press ‘Delete’) any traces near the vias and re-routing them (press ‘x’).
After adjusting the traces causing the issues, re-run the DRC. These three flags should disappear.
DRC markers have been cleared
Congrats! You’ve fixed those “Via near track” issues.
But wait, we are not done yet! There are still two DRC error arrows left with the error indicating: “Pad near pad”. KiCad is trying to tell us the pads on this solder jumper are too close together. SparkFun has used this footprint for years and is comfortable with the design so let’s change the Netclass clearance constraint.
Open the DRC rules from the Design Rules menu.
Here is where you can create specific rules for specific traces and classes of traces. The problem we are running into is that the Default Clearance is 0.2mm (about 7.9 mils). If we decrease this to 7 mils (0.1778mm), click ‘OK’, and re-run the DRC…
DRC errors resolved! Now, reducing the DRC clearances just to get your board to pass DRC is not an ideal solution. We want the pads on the solder jumper close enough to be easily jumpered with solder, so increasing the distance between the pads on the footprint would be counterproductive. In general, you should set your DRC rules and stick to them.
One last note about DRC: leaving airwires on your PCB is a surefire way to generate coasters (bad, unusable PCBs).
From the DRC window there is a ‘List Unconnected’ button. This will show you the location of any unconnected traces (I had to rip up the SDA trace on the bottom right side of the PCB to show this error). It’s very important that you check for airwires before ordering your PCBs. As you progress through your layout, I recommend focusing on the ‘Unconnected’ count at the bottom of the screen (circled in pink). If you think you are done routing a board but still show a few unconnected wires that you can’t find, the DRC window will help you locate them.
Press ‘ctrl+s’ to save your work.
Well done. You’ve made it through design rule checking! Now it’s time to order boards.
We added a component to the schematic, we modified the PCB layout, and we checked for errors. Now we are confident and ready to have our boards made! Time to export the gerber files.
Gerber files are the ‘artwork’ or the layers that the PCB fabrication house will use to construct the board. We’ve got a great tutorial on the different layers of a PCB so be sure to read up if all this is new to you.
Click on the ‘Plot’ button next to the printer icon in the top bar to open the ‘Plot’ window.
In general, there are 8x layers you need to have a PCB fabricated:
In the Plot window with the Plot format set for Gerber, be sure these Layers are checked:
Additionally, click on the ‘Generate Drill File’ button. You can use the defaults here as well. More on the PTH vs. NPTH checkbox in a minute. For now, just click ‘Drill File’ or press enter to generate the drill file.
Click on ‘Close’ in the ‘Drill Files Generation’ window.
Click ‘Plot’ to generate the gerber files for the layers and then click ‘Close’.
This is the last chance to catch any errors before paying real money. Reviewing the gerber layers often shows you potential errors or problems before you send them off to fab.
Return to the main KiCad project window and open up GerbView by clicking on the button.
Once KiCad’s GerbView is open, click on File -> Load Gerber File. Select all the files shown and click Open.
Next, click File -> Load EXCELLON Drill File. Load your drill files by selecting all the drill files shown and clicking ‘Open’. They should be in the same directory.
The layout looks very different but this is a good thing. You’ve been staring at your design for hours and it’s hard for your brain to see issues. I generally do not change the layer colors unless I have to. I want the gerber review to be jarring and different from my layout practices so that I’m more likely to catch issues.
From this view, turn off all the layers but the Top Copper (layer 5). Additionally, from the Render menu, turn off the Grid and DCodes. This will make the review less cluttered.
Now step through the different layers by toggling them on and off. You’re looking for irregularities and things that look out of place. Here are some things I look for:
Now turn everything off and repeat for the bottom layers.
Did you catch it? There are a handful of things wrong with this example.
Leaving a silkscreen indicator off won’t break your board but it’s small defects like this that the gerber review is meant to catch.
Whoops! Bottom silkscreen for GND is missing!
Take a moment and return to the PCB layout window to make these corrections.
Now, we need to deal with the two drill files.
When generating the drill file for this design, two files were generated:
Non-plated through holes are holes on your PCB that do not have copper covering the vertical walls of the hole. This is sometimes required for advanced designs where thorough electrical isolation is needed. However, it is rare. While plated through holes (PTH) are common and cheap, NPTH requires an extra step in the PCB fabrication process and will often cost extra.
We don’t need NPTH for this design, so what happened? The ‘STAND-OFF’ footprint (used for the mounting holes at the top of the board) was imported from the SparkFun Eagle library, and KiCad seems to think it is a non-plated hole for some reason.
To correct this go back to the PCB layout, click on the Plotter, click ‘Generate Drill File’ and select the box that says ‘Merge PTH and NPTH holes into one file’. In a later section, we’ll go over how to edit the ‘STAND-OFF’ footprint to use a regular PTH hole.
Are you doing SMD reflow? Need to order a stencil to apply the solder paste to your board? Turn on F.Paste in the Plot window to generate the top paste layer.
This *.gtp file is sent to a stencil fabricator to create the stainless steel or mylar solder paste stencil. If you’re unfamiliar with stenciling solder paste we have a fabulous tutorial.
We use OSHStencils for our proto stencils. The top paste layer is not needed to fabricate a PCB.
If you’re happy with your layout, let’s order some PCBs! Every fab house understands and works with gerber files, so navigate to the directory on your computer where your KiCad project resides.
Select and zip the following 8x files:
You could zip all the files in the directory and send them off to your fab house but I don’t recommend it. There are a tremendous number of PCB layout software packages generating all sorts of different file names and formats. It’s often difficult to tell if *.cmp is a gerber file or something else. Does the customer care about the *.gtp file or is that just extra? It’s better to give the fab house only what you want fabricated.
The final step? Order your boards! The gerbers are the universal way to communicate with a PCB vendor. There are hundreds if not thousands of PCB vendors out there. Shop around!
In addition to your gerbers, you’ll need to specify via email or the PCB vendor’s website various elements of the PCB:
If you had a look at the soldermask on this PCB and wondered why it looked odd, you’re not alone. Let’s compare the PCB’s soldermask for KiCad (as shown in green) and Eagle (as shown in pink). You should notice two things:
In the image below, we can see the SMD Qwiic connector within Eagle. The default soldermask clearance is 0.1mm per side in Eagle.
In KiCad’s Pcbnew, open the ZOPT220x Breakout and click on Dimensions -> Pads Mask Clearance. KiCad’s solder mask clearance has a default of 0.2mm per side. We recommend you change this value to 0.1mm. Most fab houses will use 0.1mm as their default as well. You will then need to re-export your gerbers and load them back into GerbView.
Making the clearance smaller than 0.1mm will cause difficulties for the fab house to get the registration correct.
This section will show you how to create your own local custom footprints so that you can use them and connect them to schematic components using CvPcb. We’re going to assume you’ve already been through the previous sections of this tutorial; you should have KiCad downloaded and installed.
Open KiCad’s project manager and then click on the PCB footprint editor button.
You may get a warning. That’s OK; just click through it. This is KiCad’s way of telling you it’s going to create the default table of libraries that link to KiCad’s extensive GitHub repos.
Click Preferences -> Footprint Libraries Manager. This will open the list of all the footprint libraries now accessible to you.
This is a tremendous list of libraries! Click ‘OK’ to close the manager.
Let’s poke around these libraries. Click on ‘Load footprint from library’ button and then ‘Select by Browser’. This is a handy tool for perusing the available footprints.
Navigate to the LEDs -> LED_CREE-XHP50_12V footprint. Here is an example footprint in LEDs library. Double click on this footprint to open it up in the editor.
Note the title bar of the editor window has changed. The active library is now LEDs and it is read only. Obviously KiCad wants to control their libraries; not just anyone can save to their repos. If we want to edit this footprint we need our own local copy.
Let’s create a local directory to keep all our local footprints. For this tutorial, please create a local folder called ‘C:\KiCadLibs\’ (or your platform’s equivalent).
Now click on File->Save Footprint in New Library.
I recommend using different directory names for different sets of footprints (resistors, connectors, LEDs, etc). Select the ‘KiCadLibs’ folder that was created and then type ‘\LEDs’. KiCad will create the new ‘LEDs.pretty’ directory with a file ‘C:\KiCadLibs\LEDs.pretty\LED_CREE-XHP50_12V.kicad_mod’. And we’re off to the races. Except, not quite yet.
Notice the title bar in the Footprint Editor still states the active library is LEDs and is read only. We need to switch the active library to our local folder. I’m going to head you off: File->Set Active Library doesn’t work here, as it only lists the libraries that KiCad ships with. Oh KiCad!
Before we can set our new footprint directory as active, we need to make KiCad aware of it. Re-open the Preferences -> Footprint Libraries Manager.
Click on the ‘Append with Wizard’ button. You’ll be asked to locate the directory you want to add. In this case, we want to add ‘Files on my computer’. Click on the ‘Next >’ button and select the directory we created (i.e. ‘C:\KiCadLibs\LEDs.pretty’). Click on ‘Next >’ a few times. When prompted ‘Where do you wish the new libraries to be added’, select ‘To Global library configuration (visible to all projects)’ and click ‘Finish’.
KiCad may throw an error because the ‘LEDs’ nickname is used twice. I renamed mine to ‘LEDs-Custom’, then clicked ‘OK’ to close out the Footprint Libraries Manager.
If you inspect the Footprint Editor tool bar again, you’ll see the LEDs library is still active and read only. Now we can click on ‘File->Set Active Library’. Here is where KiCad shines – the Filter works well. Type LED and select the LEDs-Custom library.
At last! We have an active local library. Now when you click ‘Save footprint in local library’ or press ‘ctrl+s’ KiCad will prompt you with a Save Footprint window with Name (annoyingly every time). Press enter and your modifications will be saved.
Now you can explore creating and editing footprints using the Footprint Editor.
After you’ve created your first footprint or two be sure to read KiCad’s KiCad Library Conventions (KLC). It’s a well documented system for creating community share-able footprints. Left to our own devices we will all create things a little differently; the KLC tries to get us all on the same page and SparkFun follows it.
In the future, if you’re creating a lot of footprints, consider using a git repo to manage the changes. At SparkFun, we use the following structure:
By using a git repo, SparkFun engineers and our users can contribute schematic components and footprints.
When opening CvPcb to assign footprints to the schematic components, it can take a very long time to load. This is because KiCad is pinging all the KiCad github repos and pulling down 93 libraries. To make this faster, we recommend removing the libraries that are either deprecated or libraries that you will never use.
It’s quick and easy to remove a library: select a row in the Footprint Libraries Manager and click the ‘Remove Library’ button. If something goes wrong, don’t panic! Simply click ‘Cancel’ in the manager window and the library manager will close without saving changes. If things go really wrong, you can always delete the ‘fp-lib-table’ file and restart KiCad. This will cause it to create the footprint table with the KiCad defaults.
The footprint libraries table file (on Windows 10) is located in your AppData. It should look similar to: ‘C:\Users\Nathan\AppData\Roaming\kicad\fp-lib-table’ .
The contents of ‘fp-lib-table’
Removing the deprecated libraries brings the default count down to 75 and CvPcb still takes an annoyingly long time to load. This is where you’ll have to make some tough decisions. Do you plan to ever need the ‘Shielding-Cabinets’ library? Perhaps. Perhaps not. If I ever do need an RF shield for a design, it will most likely be a custom part or a part that is not in the library. So that one gets the toss.
SparkFun is taking a blended approach. We’re becoming very familiar with the default KiCad libraries and using their footprints wherever it makes sense. When we find or use a package we like, we copy it over to the SparkFun-KiCad-Libraries GitHub repo. At the same time, we’re continuing to leverage all our custom Eagle footprints that we’ve been using and creating for over a decade. We know and trust these footprints. I have had many PCBs ruined because I trusted someone else’s footprint so I tend to be very paranoid. Use the community where you can but be very rigorous about checking them for correctness.
If you need a generic 2×5 pin male header, check the KiCad libraries; they should work fine. However, if you’re using a more eclectic part, you may be better off creating the footprint from scratch. Even if the KiCad libraries contain the part, you’ll want to check it against the datasheet very closely and do a one-to-one test print.
If you’re familiar with Eagle, it can be scary to think all the time spent creating footprints will be lost when switching to KiCad. Don’t fear! KiCad inherently reads Eagle footprints! Yep, it’s built right in. Now don’t get too excited. KiCad can’t read your Eagle schematic components but we have a solution for that in a later section.
The approach we are taking at SparkFun is to link to a local copy of all our classic Eagle libraries. Anytime we need one of the Eagle footprints, we copy and paste it into a modern KiCad library. We don’t have to re-create the footprint, and by moving it over to a KiCad library we are able to edit it as needed. Furthermore, any new footprints are created from scratch and saved to the appropriate SparkFun KiCad library.
You should have already opened the PCB Footprint Editor at least once by now. This will have created a ‘fp-lib-table’ file that we will be editing shortly. Now to get started, be sure that KiCad is closed.
Download the SparkFun Eagle Libraries from GitHub.
Unzip them into a local directory of your choice. I store our Eagle libraries in a DropBox folder so both my desktop and laptop can access the same set of files.
You could use the Footprint Libraries Manager located in the footprint editor but adding or removing many libraries becomes tedious; it’s easier to edit the table file directly.
The contents of fp-lib-table
The ‘fp-lib-table’ tells KiCad where to find all the various libraries and what types of libraries they are (KiCad, github, EAGLE, etc).
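For reference, each entry in ‘fp-lib-table’ is an s-expression naming the library, its plugin type, and where it lives. The entries below are illustrative only (the names, paths, and environment variables are examples, not copied from a real install):

```
(fp_lib_table
  (lib (name LEDs)(type KiCad)(uri ${KIGITHUB}/LED_THT.pretty)(options "")(descr ""))
  (lib (name SparkFun-Connectors)(type Eagle)(uri ${SFE_LOCAL}/SparkFun-Connectors.lbr)(options "")(descr "SparkFun Eagle footprints"))
)
```

The ‘type’ field is what lets KiCad mix native libraries with Eagle ones in the same table.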
We are going to edit this file to add in the SparkFun libraries as well as remove the deprecated libraries and libraries that SparkFun doesn’t use.
Here are the files of importance:
Download the ‘combined fp-lib-table’ to a local folder. Rename it to ‘fp-lib-table’. Now move the file to where KiCad expects it. The footprint libraries table file (on Windows 10) is located in the AppData folder similar to: ‘C:\Users\Nathan\AppData\Roaming\kicad\fp-lib-table’. You’ll want to overwrite the file that is there.
Once the file is in place, re-open KiCad, open the PCB footprint editor, and finally the Footprint Libraries Manager. You should see a long list of libraries including the new SparkFun libraries.
The last step is to tell KiCad the local path to the SparkFun libraries. Currently it’s a variable called SFE_LOCAL. We need to assign this to something. Close the Library Manager window, click on Preferences -> Configuration Paths. Click the ‘Add’ button. Edit the Name and Path fields.
In the image below, you can see I’ve set the ‘SFE_LOCAL’ variable to a local path of ‘C:\Users\nathan.seidle\Dropbox\Projects\SparkFun-Eagle-Libraries\’. Set this variable to wherever you locally stored the SparkFun Eagle Libraries.
Congratulations! You can now see, use, and copy all the SparkFun Eagle libraries.
Once you’ve learned to create your own schematic parts and custom footprints, you are no longer limited in what technologies you can play with. Let’s get started!
From the main project window start the Schematic library editor.
This process is similar to how we started a custom footprint library. First, let’s find a schematic symbol we want to start our custom library with. The photocell is about as common as it gets. Let’s pull in the ‘R_PHOTO’ schematic component from the device library and use it to start our new custom schematic component library.
Start by clicking on the ‘Select working library’ (i.e. book) icon located in the upper left corner. Then select ‘device’ to set the working library.
Click on the ‘Load component to edit from current library’ button and type r_photo in the filter to quickly locate the photoresistor component. When located, click ‘OK’.
Click on the ‘Save current component to new library’ button.
I recommend you store this *.lib file in the same ‘C:\KiCadLibs\’ directory we stored the footprint library within. I called my lib file ‘CustomComponents.lib’ so that I know these are mine.
Once you click ‘Save’, a warning will pop up. This is just KiCad’s polite way of letting you know that you can’t access your library until you link to it. So let’s do that.
Click on Preferences -> Component Libraries to view the current set of libraries. In the image below, we can see the stock schematic component libraries that ship with KiCad. Next to the ‘Component library files,’ click ‘Add’.
Navigate to your ‘C:\KiCadLibs’ directory and then open ‘CustomComponents.lib’. It should now appear at the bottom of the Component library files list. Click ‘OK’ to return to the library editor.
Again, click on the ‘Select working library’ button but this time either scroll to your custom list or type ‘Custom’ to find the ‘CustomComponents’ library. Click ‘OK’.
Then click on ‘Load component to edit from the current library’ button and we should see only the photoresistor schematic component. Double click on R_PHOTO to begin editing it.
Now at this point, we can add new symbols from scratch to our library and we can also copy from one library to another.
KiCad is always changing and they’ve made leaps and bounds improvements but copying a schematic component from one library to another is still a bit wild.
For example, let’s copy the CP2104 from the silabs library into our custom library. Start by setting the active library to the one that contains the part you want to copy by clicking on the ‘Select working library’ button. In our example, we want to set silabs as the active library.
Load the CP2104 component by clicking on the ‘Load component to edit from the current library’ button.
Now set the active library to the library we want to copy the CP2104 into. For this example, that means that we need to click on the ‘Select working library’ button and set the active library to ‘CustomComponents’.
Click on the ‘Update current component in current library’ button to save the component in CustomComponents.lib. The ‘Save current library to disk’ button will become enabled and you can save this component to your custom library.
To verify it’s now in the library click on the ‘Load component to edit from the current library’ button. You should see your new shiny CP2104 in the list.
Bad CP2104! Bad component.
To remove a component, be sure you’ve set your custom library as the active one. Let’s try removing the component that we just added in our custom library CustomComponents.lib. If you have not already, click on ‘Select working library’ to set the active library to CustomComponents. Click on the ‘Delete component in current library’ (i.e. the trash can) button. You’ll be prompted for which component you want to remove. Select CP2104 from the list.
Click ‘OK’ and then ‘Yes’ to delete the component from the library. Click the ‘Save current library to disk’ button and ‘Yes’ to save.
Shout out to Joan_Sparky! He is the best! (No relation)
Be sure to check KiCad’s KiCad Library Convention once you get comfortable creating components. These conventions take into account a heap of industry specialized knowledge that we can all benefit from.
Congratulations! That was a big tutorial and you made it through.
For more information related to KiCad, check out the resources below:
Now that you’ve learned how to modify schematics, PCB layouts, and libraries, it’s time to try out your skills on your own custom project. We recommend using the ZOPT220x UV Sensor Breakout KiCad files as the starting point for your next project. From this example project, you can delete or add devices as you need rather than starting from a blank canvas.
Also, check out SparkFun’s Enginursday blog post about KiCad.
If you are an EAGLE guru starting to get your feet wet with KiCad, be sure to checkout Lachlan’s Eagle to KiCad converter for converting your Eagle PCB layouts to KiCad. It’s not perfect but Lachlan has done a tremendous amount of groundwork.
Thanks for reading and if you have any comments or questions please ask them in the comments section.
We live in an analog world. There are an infinite number of colors to paint an object (even if the difference is indiscernible to our eye), there are an infinite number of tones we can hear, and there are an infinite number of smells we can smell. The common theme among all of these analog signals is their infinite possibilities.
Digital signals and objects deal in the realm of the discrete or finite, meaning there is a limited set of values they can be. That could mean just two total possible values, or 256, or 4,294,967,296; anything, as long as it’s not ∞ (infinity).
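Those counts aren’t arbitrary: an n-bit digital signal can take exactly 2^n distinct values. A quick sketch in Python (the function name is just for illustration):

```python
# An n-bit digital signal can represent 2**n distinct values --
# always a finite set, unlike an analog quantity.
def num_levels(bits):
    return 2 ** bits

print(num_levels(1))   # a single logic line: 2 values
print(num_levels(8))   # one byte: 256 values
print(num_levels(32))  # a 32-bit word: 4,294,967,296 values
```
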
Real-world objects can display data or gather inputs by either analog or digital means. (From left to right): Clocks, multimeters, and joysticks can all take either form (analog above, digital below).
Working with electronics means dealing with both analog and digital signals, inputs and outputs. Our electronics projects have to interact with the real, analog world in some way, but most of our microprocessors, computers, and logic units are purely digital components. These two types of signals are like different electronic languages; some electronics components are bi-lingual, others can only understand and speak one of the two.
In this tutorial, we’ll cover the basics of both digital and analog signals, including examples of each. We’ll also talk about analog and digital circuits, and components.
The concepts of analog and digital stand on their own, and don’t require a lot of previous electronics knowledge. That said, if you haven’t already, you should peek through some of these tutorials:
Before going too much further, we should talk a bit about what a signal actually is, electronic signals specifically (as opposed to traffic signals, albums by the ultimate power-trio, or a general means for communication). The signals we’re talking about are time-varying “quantities” which convey some sort of information. In electrical engineering, the quantity that’s time-varying is usually voltage (if not that, then usually current). So when we talk about signals, just think of them as a voltage that’s changing over time.
Signals are passed between devices in order to send and receive information, which might be video, audio, or some sort of encoded data. Usually the signals are transmitted through wires, but they could also pass through the air via radio frequency (RF) waves. Audio signals, for example, might be transferred between your computer’s audio card and speakers, while data signals might be passed through the air between a tablet and a WiFi router.
Because a signal varies over time, it’s helpful to plot it on a graph where time is plotted on the horizontal, x-axis, and voltage on the vertical, y-axis. Looking at a graph of a signal is usually the easiest way to identify if it’s analog or digital; a time-versus-voltage graph of an analog signal should be smooth and continuous.
While these signals may be limited to a range of maximum and minimum values, there are still an infinite number of possible values within that range. For example, the analog voltage coming out of your wall socket might be clamped between -120V and +120V, but, as you increase the resolution more and more, you discover an infinite number of values that the signal can actually be (like 64.4V, 64.42V, 64.424V, and infinite, increasingly precise values).
Video and audio transmissions are often transferred or recorded using analog signals. The composite video coming out of an old RCA jack, for example, is a coded analog signal usually ranging between 0 and 1.073V. Tiny changes in the signal have a huge effect on the color or location of the video.
An analog signal representing one line of composite video data.
Pure audio signals are also analog. The signal that comes out of a microphone is full of analog frequencies and harmonics, which combine to make beautiful music.
Digital signals must have a finite set of possible values. The number of values in the set can be anywhere between two and a-very-large-number-that’s-not-infinity. Most commonly digital signals will be one of two values – like either 0V or 5V. Timing graphs of these signals look like square waves.
Or a digital signal might be a discrete representation of an analog waveform. Viewed from afar, the wave function below may seem smooth and analog, but when you look closely there are tiny discrete steps as the signal tries to approximate values:
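To see where those tiny steps come from, here’s a rough Python sketch of what a converter does: it snaps each smooth analog value to the nearest of a small set of discrete levels. The function name and the coarse 3-bit resolution are just for illustration:

```python
import math

def quantize(value, bits, vmin=-1.0, vmax=1.0):
    """Snap an analog value to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits - 1
    step = (value - vmin) / (vmax - vmin)          # normalize to 0..1
    code = round(step * levels)                    # nearest digital code
    return vmin + code * (vmax - vmin) / levels    # back to volts

# A smooth sine wave, seen through a coarse 3-bit converter:
for i in range(8):
    v = math.sin(2 * math.pi * i / 8)
    print(f"{v:+.3f} -> {quantize(v, 3):+.3f}")
```

With only 8 levels the staircase is obvious; crank `bits` up and the steps shrink until, from afar, the result looks smooth again.
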
That’s the big difference between analog and digital waves. Analog waves are smooth and continuous, digital waves are stepping, square, and discrete.
Serial peripheral interface (SPI) uses many digital signals to transmit data between devices.
Most of the fundamental electronic components – resistors, capacitors, inductors, diodes, transistors, and operational amplifiers – are all inherently analog. Circuits built with a combination of solely these components are usually analog.
Analog circuits are usually complex combinations of op amps, resistors, caps, and other foundational electronic components. This is an example of a class B analog audio amplifier.
Analog circuits can be very elegant designs with many components, or they can be very simple, like two resistors combining to make a voltage divider. In general, though, analog circuits are much more difficult to design than those which accomplish the same task digitally. It takes a special kind of analog circuit wizard to design an analog radio receiver, or an analog battery charger; digital components exist to make those designs much simpler.
Analog circuits are usually much more susceptible to noise (small, undesired variations in voltage). Small changes in the voltage level of an analog signal may produce significant errors when being processed.
Digital circuits operate using digital, discrete signals. These circuits are usually made of a combination of transistors and logic gates and, at higher levels, microcontrollers or other computing chips. Most processors, whether they’re big beefy processors in your computer, or tiny little microcontrollers, operate in the digital realm.
Digital circuits make use of components like logic gates, or more complicated digital ICs (usually represented by rectangles with labeled pins extending from them).
Digital circuits usually use a binary scheme for digital signaling. These systems assign two different voltages as two different logic levels – a high voltage (usually 5V, 3.3V, or 1.8V) represents one value and a low voltage (usually 0V) represents the other.
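A receiver decides which logic level it’s seeing by comparing the voltage against thresholds. The sketch below is illustrative only; the 0.8V/2.0V numbers are roughly TTL-style assumptions, and real parts specify their own input thresholds (V_IL and V_IH) in the datasheet:

```python
def logic_level(voltage, v_il=0.8, v_ih=2.0):
    """Classify a voltage as logic 0, logic 1, or undefined.
    The default thresholds are illustrative, roughly TTL-compatible;
    check the datasheet of a real part for its V_IL / V_IH."""
    if voltage <= v_il:
        return 0
    if voltage >= v_ih:
        return 1
    return None  # the forbidden region between thresholds: undefined

print(logic_level(0.2))  # a solid low
print(logic_level(3.3))  # a solid high
print(logic_level(1.5))  # in between: undefined
```
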
Although digital circuits are generally easier to design, they do tend to be a bit more expensive than an equally tasked analog circuit.
It’s not rare to see a mixture of analog and digital components in a circuit. Although microcontrollers are usually digital beasts, they often have internal circuitry which enables them to interface with analog circuitry (analog-to-digital converters, pulse-width modulation, and digital-to-analog converters). An analog-to-digital converter (ADC) allows a microcontroller to connect to an analog sensor (like a photocell or temperature sensor) to read in an analog voltage. The less common digital-to-analog converter (DAC) allows a microcontroller to produce analog voltages, which is handy when it needs to make sound.
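The math behind an ideal ADC and DAC is simple scaling. This sketch assumes a 10-bit converter with a 5V reference (common Arduino-style numbers, used here only as an example):

```python
def adc_read(v_in, v_ref=5.0, bits=10):
    """Ideal ADC: map 0..v_ref volts onto 0..2**bits - 1 counts."""
    counts = round(v_in / v_ref * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, counts))  # clamp to the valid range

def dac_write(counts, v_ref=5.0, bits=10):
    """Ideal DAC: the inverse mapping, counts back to volts."""
    return counts / (2 ** bits - 1) * v_ref

print(adc_read(2.5))            # mid-scale input, roughly half of 1023
print(dac_write(adc_read(2.5))) # round-trips to nearly 2.5 V
```

The round trip isn’t exact; the small error is the quantization we saw earlier, and it shrinks as `bits` grows.
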
Now that you know the difference between analog and digital signals, we’d suggest checking out the Analog to Digital Conversion tutorial. Working with microcontrollers, or really any logic-based electronics, means working in the digital realm most of the time. If you want to sense light, temperature, or interface a microcontroller with a variety of other analog sensors, you’ll need to know how to convert the analog voltage they produce into a digital value.
Also, consider reading our Pulse Width Modulation (PWM) tutorial. PWM is a trick microcontrollers can use to make a digital signal appear to be analog.
Here are some other subjects which deal heavily with digital interfaces:
Or, if you’d like to delve further into the analog realm, consider checking out these tutorials:
Serial Peripheral Interface (SPI) is an interface bus commonly used to send data between microcontrollers and small peripherals such as shift registers, sensors, and SD cards. It uses separate clock and data lines, along with a select line to choose the device you wish to talk to.
Stuff that would be helpful to know before reading this tutorial:
A common serial port, the kind with TX and RX lines, is called “asynchronous” (not synchronous) because there is no control over when data is sent or any guarantee that both sides are running at precisely the same rate. Since computers normally rely on everything being synchronized to a single “clock” (the main crystal attached to a computer that drives everything), this can be a problem when two systems with slightly different clocks try to communicate with each other.
To work around this problem, asynchronous serial connections add extra start and stop bits to each byte to help the receiver sync up to the data as it arrives. Both sides must also agree on the transmission speed (such as 9600 bits per second) in advance. Slight differences in the transmission rate aren’t a problem because the receiver re-syncs at the start of each byte.
(By the way, if you noticed that “11001010” does not equal 0x53 in the above diagram, kudos to your attention to detail. Serial protocols will often send the least significant bits first, so the smallest bit is on the far left. The lower nybble is actually 0011 = 0x3, and the upper nybble is 0101 = 0x5.)
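To make the framing concrete, here is a small C sketch (the function name is mine, not from any library) that builds the 10-bit frame for one byte: one start bit, eight data bits sent least-significant-bit first, one stop bit. Framing 0x53 this way reproduces exactly the "11001010" data-bit pattern from the diagram:

```c
#include <stdint.h>

/* Build an 8N1 asynchronous serial frame: one start bit (low), eight data
 * bits transmitted least-significant-bit first, and one stop bit (high).
 * Returns the number of bit periods in the frame (always 10). */
int frame_8n1(uint8_t byte, uint8_t frame[10])
{
    int n = 0;
    frame[n++] = 0;                   /* start bit pulls the idle-high line low */
    for (int i = 0; i < 8; i++)
        frame[n++] = (byte >> i) & 1; /* data bits, LSB first */
    frame[n++] = 1;                   /* stop bit returns the line to idle */
    return n;
}
```

Framing 0x53 (binary 01010011) produces the wire sequence 0, 1, 1, 0, 0, 1, 0, 1, 0, 1 — start bit, the data bits "11001010" with the smallest bit on the left, then the stop bit.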
Asynchronous serial works just fine, but has a lot of overhead in both the extra start and stop bits sent with every byte, and the complex hardware required to send and receive data. And as you’ve probably noticed in your own projects, if both sides aren’t set to the same speed, the received data will be garbage. This is because the receiver is sampling the bits at very specific times (the arrows in the above diagram). If the receiver is looking at the wrong times, it will see the wrong bits.
SPI works in a slightly different manner. It’s a “synchronous” data bus, which means that it uses separate lines for data and a “clock” that keeps both sides in perfect sync. The clock is an oscillating signal that tells the receiver exactly when to sample the bits on the data line. This could be the rising (low to high) or falling (high to low) edge of the clock signal; the datasheet will specify which one to use. When the receiver detects that edge, it will immediately look at the data line to read the next bit (see the arrows in the below diagram). Because the clock is sent along with the data, specifying the speed isn’t important, although devices will have a top speed at which they can operate (We’ll discuss choosing the proper clock edge and speed in a bit).
One reason that SPI is so popular is that the receiving hardware can be a simple shift register. This is a much simpler (and cheaper!) piece of hardware than the full-up UART (Universal Asynchronous Receiver / Transmitter) that asynchronous serial requires.
You might be thinking to yourself, self, that sounds great for one-way communications, but how do you send data back in the opposite direction? Here’s where things get slightly more complicated.
In SPI, only one side generates the clock signal (usually called CLK or SCK for Serial ClocK). The side that generates the clock is called the “master”, and the other side is called the “slave”. There is always only one master (which is almost always your microcontroller), but there can be multiple slaves (more on this in a bit).
When data is sent from the master to a slave, it’s sent on a data line called MOSI, for “Master Out / Slave In”. If the slave needs to send a response back to the master, the master will continue to generate a prearranged number of clock cycles, and the slave will put the data onto a third data line called MISO, for “Master In / Slave Out”.
Notice we said “prearranged” in the above description. Because the master always generates the clock signal, it must know in advance when a slave needs to return data and how much data will be returned. This is very different than asynchronous serial, where random amounts of data can be sent in either direction at any time. In practice this isn’t a problem, as SPI is generally used to talk to sensors that have a very specific command structure. For example, if you send the command for “read data” to a device, you know that the device will always send you, for example, two bytes in return. (In cases where you might want to return a variable amount of data, you could always return one or two bytes specifying the length of the data and then have the master retrieve the full amount.)
Note that SPI is “full duplex” (has separate send and receive lines), and, thus, in certain situations, you can transmit and receive data at the same time (for example, requesting a new sensor reading while retrieving the data from the previous one). Your device’s datasheet will tell you if this is possible.
There’s one last line you should be aware of, called SS for Slave Select. This tells the slave that it should wake up and receive / send data and is also used when multiple slaves are present to select the one you’d like to talk to.
The SS line is normally held high, which disconnects the slave from the SPI bus. (This type of logic is known as “active low,” and you’ll often see it used for enable and reset lines.) Just before data is sent to the slave, the line is brought low, which activates the slave. When you’re done using the slave, the line is made high again. In a shift register, this corresponds to the “latch” input, which transfers the received data to the output lines.
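Putting the clock, data lines, and shifting behavior together, below is a hedged C sketch of one SPI mode-0 byte exchange, with the slave simulated as a bare 8-bit shift register (which, as noted above, is all the slave hardware needs to be). The names are mine; real firmware would toggle actual GPIO pins for SCK, MOSI, MISO, and SS instead:

```c
#include <stdint.h>

/* Simulated slave: an 8-bit shift register preloaded with its reply byte. */
static uint8_t slave_reg;

/* One full-duplex SPI byte exchange, mode 0, MSB first. Each clock cycle the
 * master drives MOSI, both sides sample on the "rising edge", and both shift
 * registers advance -- so the two bytes are swapped after eight clocks. */
static uint8_t spi_transfer(uint8_t mosi_byte)
{
    uint8_t miso_byte = 0;
    for (int bit = 7; bit >= 0; bit--) {
        uint8_t mosi = (mosi_byte >> bit) & 1;          /* master's next bit  */
        uint8_t miso = (slave_reg >> 7) & 1;            /* slave's MSB        */
        miso_byte = (uint8_t)((miso_byte << 1) | miso); /* rising edge: sample */
        slave_reg = (uint8_t)((slave_reg << 1) | mosi); /* falling edge: shift */
    }
    return miso_byte;
}
```

Preload slave_reg with 0x53, call spi_transfer(0xA5), and the master reads back 0x53 while the slave now holds 0xA5 — transmit and receive happening in the same eight clocks, which is exactly the full-duplex behavior described below.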
There are two ways of connecting multiple slaves to an SPI bus: in the general case, each slave gets its own SS line while sharing the clock, MOSI, and MISO lines; in special cases, the slaves can be daisy-chained, with the output of one slave feeding the input of the next.
Note that, for this layout, data overflows from one slave to the next, so to send data to any one slave, you’ll need to transmit enough data to reach all of them. Also, keep in mind that the first piece of data you transmit will end up in the last slave.
This type of layout is typically used in output-only situations, such as driving LEDs where you don’t need to receive any data back. In these cases you can leave the master’s MISO line disconnected. However, if data does need to be returned to the master, you can do this by closing the daisy-chain loop (blue wire in the above diagram). Note that if you do this, the return data from slave 1 will need to pass through all the slaves before getting back to the master, so be sure to send enough receive commands to get the data you need.
Many microcontrollers have built-in SPI peripherals that handle all the details of sending and receiving data, and can do so at very high speeds. The SPI protocol is also simple enough that you (yes, you!) can write your own routines to manipulate the I/O lines in the proper sequence to transfer data. (A good example is on the Wikipedia SPI page.)
If you’re using an Arduino, there are two ways you can communicate with SPI devices: you can use the built-in SPI library, which drives the hardware SPI peripheral, or you can bit-bang the bus on any pins with the shiftIn() and shiftOut() commands (slower, but more flexible).
You will need to select some options when setting up your interface. These options must match those of the device you’re talking to; check the device’s datasheet to see what it requires.
Check out the Wikipedia page on SPI, which includes lots of good information on SPI and other synchronous interfaces.
A number of SparkFun products have SPI interfaces. For example, the Bar Graph Breakout kit has an easy-to-use SPI interface that you can use to turn any of 30 LEDs on or off.
Other communication options:
Now that you’re a pro on SPI, here are some other tutorials to practice your new skills:
Why do we care about power? Power is the measurement of energy transfer over time, and energy costs money. Batteries aren’t free, and neither is that stuff coming out of your electrical outlet. So, power measures how fast the pennies are draining out of your wallet!
Also, energy is…energy. It comes in many, potentially harmful, forms – heat, radiation, sound, nuclear, etc. – and more power means more energy. So, it’s important to have an idea of what kind of power you’re working with when playing with electronics. Fortunately, in playing with Arduinos, lighting up LEDs, and spinning small motors, losing track of how much power you’re using only means smoking a resistor or melting an IC. Nevertheless, Uncle Ben’s advice doesn’t just apply to superheroes.
Power is one of the more fundamental concepts in electronics. But before learning about power, there might be some other tutorials you should read first. If you’re not familiar with some of these topics, consider checking out those tutorials first:
In general physics terms, power is defined as the rate at which energy is transferred (or transformed).
So, first, what is energy and how is it transferred? It’s hard to state simply, but energy is basically the ability of something to move something else. There are many forms of energy: mechanical, electrical, chemical, electromagnetic, thermal, and many others.
Energy can never be created or destroyed, only transferred to another form. A lot of what we’re doing in electronics is converting different forms of energy to and from electric energy. Lighting LEDs turns electric energy into electromagnetic energy. Spinning motors turns electric energy into mechanical energy. Buzzing buzzers makes sound energy. Powering a circuit off a 9V alkaline battery turns chemical energy into electrical energy. All of these are forms of energy transfer.
Example electric components, which transfer electric energy to another form.
Electric energy, in particular, begins as electric potential energy – what we lovingly refer to as voltage. When electrons flow through that potential energy, it turns into electric energy. In most useful circuits, that electric energy transforms into some other form of energy. Electric power is measured by combining both how much electric energy is transferred, and how fast that transfer happens.
Each component in a circuit either consumes or produces electric energy. A consumer transforms electric energy into another form. For example, when an LED lights up, electric energy is transformed into electromagnetic energy. In this case, the LED consumes power. Electric power is produced when energy is transferred to electric from some other form. A battery supplying power to a circuit is an example of a power producer.
Energy is measured in terms of joules (J). Since power is a measure of energy over a set amount of time, we can measure it in joules per second. The SI unit for joules per second is the watt, abbreviated W.
It’s very common to see “watts” preceded by one of the standard SI prefixes: microwatts (µW), milliwatts (mW), kilowatts (kW), megawatts (MW), and gigawatts (GW) are all common, depending on the situation.
Microcontrollers, like the Arduino, will usually operate in the µW or mW range. Laptop and desktop computers operate in the standard watt power range. Energy consumption of a house is usually in the kilowatt range. Large stadiums might operate at the megawatt scale. And gigawatts come into play for large-scale power stations and time machines.
Electric power is the rate at which energy is transferred. It’s measured in terms of joules per second (J/s) – a watt (W). Given the few basic electricity terms we know, how could we calculate power in a circuit? Well, we’ve got a very standard measurement involving potential energy – volts (V) – which are defined in terms of joules per unit of charge (coulomb) (J/C). Current, another of our favorite electricity terms, measures charge flow over time in terms of the ampere (A) – coulombs per second (C/s). Put the two together and what do we get?! Power!
To calculate the power of any particular component in a circuit, multiply the voltage drop across it by the current running through it.
Below is a simple (though not all that functional) circuit: a 9V battery connected across a 10Ω resistor.
How do we calculate the power across the resistor? First we have to find the current running through it. Easy enough…Ohm’s law!
Alright, 900mA (0.9A) running through the resistor, and 9V across it. What kind of power is being applied to the resistor then?
A resistor transforms electric energy into heat. So this circuit transforms 8.1 joules of electric energy to heat every second.
When it comes to calculating power in a purely resistive circuit, knowing two of three values (voltage, current, and/or resistance) is all you really need.
By plugging Ohm’s law (V=IR or I=V/R) into our traditional power equation we can create two new equations. The first, purely in terms of voltage and resistance:
So, in our previous example, (9V)²/10Ω (V²/R) is 8.1W, and we never have to calculate the current running through the resistor.
A second power equation can be formed solely in terms of current and resistance:
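All three forms are easy to sanity-check in code. A quick sketch (the helper names are mine), using the 9V/10Ω example from above:

```c
/* Three equivalent expressions for power dissipated in a resistance:
 * P = VI, P = V^2/R, and P = I^2R. */
float power_from_v_i(float v, float i) { return v * i; }
float power_from_v_r(float v, float r) { return (v * v) / r; }
float power_from_i_r(float i, float r) { return i * i * r; }
```

For 9V across 10Ω (and therefore 0.9A through it), all three return 8.1W — a handy cross-check when only two of the three quantities are known.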
Why do we care about the power dropped on a resistor? Or any other component for that matter. Remember that power is the transfer of energy from one type to another. When that electrical energy running from the power source hits the resistor, the energy transforms into heat. Possibly more heat than the resistor can handle. Which leads us to…power ratings.
All electronic components transfer energy from one type to another. Some energy transfers are desired: LEDs emitting light, motors spinning, batteries charging. Other energy transfers are undesirable, but also unavoidable. These unwanted energy transfers are power losses, which usually show up in the form of heat. Too much power loss – too much heat on a component – can become very undesirable.
Even when energy transfers are the main goal of a component, there’ll still be losses to other forms of energy. LEDs and motors, for example, will still produce heat as a byproduct of their other energy transfers.
Most components have a rating for maximum power they can dissipate, and it’s important to keep them operating under that value. This’ll help you avoid what we lovingly refer to as “letting the magic smoke out”.
Resistors are some of the more notorious culprits of power loss. When you drop some voltage across a resistor, you’re also going to induce current flow across it. More voltage, means more current, means more power.
Remember back to our first power-calculation example, where we found that if 9V were dropped across a 10Ω resistor, that resistor would dissipate 8.1W. 8.1 is a lot of watts for most resistors. Most resistors are rated for anywhere from ⅛W (0.125W) to ½W (0.5W). If you drop 8W across a standard ½W resistor, ready a fire extinguisher.
If you’ve seen resistors before, you’ve probably seen these. Top is a ½W resistor and below that a ¼W. These aren’t built to dissipate very much power.
There are resistors built to handle large power drops. These are specifically called out as power resistors.
These large resistors are built to dissipate lots of power. From left to right: two 3W 22kΩ resistors, two 5W 0.1Ω resistors, and 25W 3Ω and 2Ω resistors.
If you ever find yourself picking out a resistor value, keep its power rating in mind as well. And, unless your goal is to heat something up (heating elements are basically really high-power resistors), try to minimize power loss in a resistor.
Resistor power ratings can come into play when you’re trying to decide on a value for an LED current-limiting resistor. Say, for example, you want to light up a 10mm super-bright red LED at maximum brightness, using a 9V battery.
That LED has a maximum forward current of 80mA and a forward voltage of about 2.2V. So to deliver 80mA to the LED, you’d need an 85Ω resistor: (9V − 2.2V) / 0.08A = 85Ω.
6.8V dropped on the resistor, and 80mA running through it means 0.544W (6.8V*0.08A) of power lost on it. A half-watt resistor isn’t going to like that very much! It probably won’t melt, but it’ll get hot. Play it safe and move up to a 1W resistor (or save power and use a dedicated LED driver).
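The whole sizing calculation fits in a couple of lines of C; here is a hedged sketch of it (the function names are mine):

```c
/* Current-limiting resistor for an LED: drop the supply-minus-forward voltage
 * across the resistor at the desired current, and check the power it burns. */
float limit_resistor_ohms(float v_supply, float v_forward, float i_amps)
{
    return (v_supply - v_forward) / i_amps;   /* (9 - 2.2) / 0.08 = 85 ohms */
}

float resistor_power_watts(float v_supply, float v_forward, float i_amps)
{
    return (v_supply - v_forward) * i_amps;   /* 6.8 V * 0.08 A = 0.544 W */
}
```

If resistor_power_watts() comes back above your part’s rating, step up to a bigger resistor (or rethink the circuit, as suggested above).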
Resistors certainly aren’t the only components where maximum power ratings must be considered. Any component with a resistive property to it is going to produce thermal power losses. Working with components that are commonly subjected to high power – voltage regulators, diodes, amplifiers, and motor drivers, for example – means paying extra special attention to power loss and thermal stress.
Now that you’re familiar with the concept of electric power, check out some of these related tutorials!
Earlier this year, Nathan Seidle, founder of SparkFun, created the Crowdsourcing Algorithms Challenge (aka, the Speed Bag Challenge). After numerous fantastic entries, one was chosen. The winner, Barry Hannigan, was asked to write up his process involved in solving this problem. This article is Barry Hannigan’s winning approach to solving real-world problems, even when the problem is not tangibly in front of you.
You can view Barry’s code by clicking the link below.
As the winner of Nate’s Speed Bag Challenge, I had the wonderful opportunity to meet with Nate at SparkFun’s headquarters in Boulder, CO. During our discussions, we thought it would be a good idea to create a tutorial describing how to go about solving a complex problem in an extremely short amount of time. While I’ll discuss specifics to this project, my hope is that you’ll be able to apply the thought process to your future projects—big or small.
In full-fledged software projects, from an engineer’s perspective, you have four major phases: Requirements, Design, Implementation, and Test.
Let’s face it; the design and coding is what everyone sees as interesting, where their creative juices can flow and the majority of the fun can be had. Naturally, there is the tendency to fixate on a certain aspect of the problem being solved and jump right in designing and coding. However, I will argue that the first and last phase can be the most important in any successful project, be it large or small. If you doubt that, consider this: my solution to the speed bag problem was designed wickedly fast, and I didn’t have a bag to test it on. But, with the right fixes applied in the end, the functionality was tested to verify that it produced the correct results. Conversely, a beautiful design and elegant implementation that doesn’t produce the required functionality will surely be considered a failure.
I didn’t mention prototype as a phase, because depending on the project it can happen in different phases or multiple phases. For instance, if the problem isn’t fully understood, a prototype can help figure out the requirements, or it can provide a proof of concept, or it can verify the use of a new technology. While important, prototyping is really an activity in one or more phases.
Getting back to the Speed Bag Challenge, in this particular case, even though it is a very small project, I suggest that you spend a little time in each of the four areas, or you will have a high probability of missing something important. To get a full understanding of what’s required, let’s survey everything we had available as inputs. The web post for the challenge listed five explicit requirements, which you can find here. Next, there was a link to Nate’s Github repository that had information on the recorded data format and a very brief explanation of how the speed bag device would work.
In this case, I would categorize what Nate did with the first speed bag counter implementation as a prototype to help reveal additional requirements. From Nate’s write-up on how he built the system, we know it used an accelerometer attached to the base of a speed bag, and that vibration data, sampled about every 2ms, is used to count punches. We also now know that applying a polynomial smoothing function and looking for peaks above a threshold doesn’t accurately detect punches.
While trying not to be too formal for a small project, I kept these objectives (requirements) in mind while working the problem:
As it goes in all projects, now that you know what should be done, the realization that there isn’t enough time sets in. Since I didn’t have the real hardware and needed to be able to visually see the output of my algorithm, I started working it out quickly in Java on my PC. I built in a way to plot the results of the waveforms on my screen. I’ve been using NetBeans for years to do Java development, so I started a new speed bag project. I always use JFreeChart library to plot data, so I added it to my project. Netbeans has a really good IDE and built-in GUI designer. All I had to do was create a GUI layout with a blank panel where I want the JFreeChart to display and then, at run time, create the JFreeChart object and add it to the panel. All the oscilloscope diagrams in this article were created by the JFreeChart display. Here is an image from my quick and dirty oscilloscope GUI design page.
This algorithm was needed in a hurry, so my first pass was to be very object oriented and use every shortcut afforded me by Java. Then, I’d make it more C-like in nature as I nailed down the algorithm sequence. I jumped right in and plotted the X, Y and Z waveforms as they came from the recorded results. Once I got a look at the raw data, I decided to remove any biases first (i.e., gravity) and then sum the square of each waveform and take the square root. I added some smoothing by way of averaging a small number of values and employed a minimum time between threshold crossings to help filter out spikes. All in all, this seemed to make the data even worse on the plot. I decided to throw away X and Y, since I didn’t know in what orientation the accelerometer was mounted, or whether it would be mounted the same way on different speed bag platforms anyway. To my horror, even with just the Z axis, it still just looked like a mess of noise! I was seeing peaks in the data way too close together. Only my minimum-time-between-thresholds gate was helping make some sense of the punch count, but there really wasn’t anything concrete in the data. Something wasn’t adding up. What was I missing?
Below is an image of the runF1() waveform. The blue signal is the filtered Z axis, and the red line is a threshold for counting punches. As I mentioned, if it weren’t for my 250ms minimum between punch detections, my counter would be going crazy. Notice that I introduced two 5 millisecond delays in my runF1() processing, so thresholding would be a little better if the red line were moved to the right by 10 milliseconds. I’ll talk more about aligning signals later in this article, but you can see in this image how time-aligning signals is crucial for getting accurate results.
If you look at the virtual oscilloscope output, you can see that between millisecond 25,000 and 26,000, which is 1 second in time, there are around nine distinct acceleration events. No way Nate is throwing nine punches in a second. Exactly how many punches should I expect to see per second? Back to the drawing board. I need another approach. Remember humility is your friend; if you just rush in on your high horse you usually will be knocked off it in a hurry.
Typically the requirements are drafted in the context of the domain of the problem that’s being solved, or some design aspects are developed from a requirement with domain knowledge applied. I don’t know the first thing about boxing speed bags, so time to do some Googling.
The real nugget I unearthed was that a boxer hits a speed bag, and it makes three contacts with the base: once forward (in punch direction), then it comes all the way back (opposite of punch direction) and strikes the base, and then it goes all the way forward again striking the base (in punch direction). Then the boxer punches it on its way back toward the boxer. This actually gives four opportunities to generate movement to the base, once from the shock of the boxer contacting the bag, and then three impacts with the base.
Now, what I see on the waveforms makes more sense. There isn’t a shock of the bag hitting the base once per punch. My second thought was how many punches can a boxer throw at a speed bag per second. Try as I might, I could not find a straight answer to this question. I found lots of websites with maximum shadow boxing punches and actual punches being thrown maximums but not a maximum for a speed bag. Time to derive my own conclusion: I thought about how far the speed bag must travel per punch and concluded that there must be a minimum amount of force to make the bag travel the distance it needs to impact the base three times. Since I’m not a boxer, all I could do is visualize hitting the bag as slowly as possible and it making three contacts. I concluded from the video in my mind’s eye that it would be difficult to hit a bag less than twice per second. OK, that’s a minimum; how about a maximum? Again, I summoned my mind’s eye video and this time moved my fist to strike the imaginary bag. I concluded with the distance the bag needed to travel and the amount of time to move a fist in and out of the path of the bag that about four per second is all that is possible, even with a skilled boxer. OK, it’s settled. I need to find events in the data that are happening between 2 and 4 hertz. Time to get back to coding and developing!
While everyone’s brain works a little differently, I suggest that you try an iterative strategy, especially when you are solving a problem that does not have a clearly defined methodology going into it. I also suggest that when you feel you are ready to make a major tweak to an algorithm, you make a copy of the algorithm before starting to modify the copy, or start with an empty function and start pulling in pieces of the previous iteration. You can use source control to preserve your previous iteration, but I like having the previous iteration(s) in the code so I can easily reference it when working on the next iteration. I usually don’t like to write more than 10 or 20 lines of code without at minimum verifying it compiles, but I really want to run it and print something out as confirmation that my logic and assumptions are correct. I’ve done this my entire career and will usually complain if I don’t have target hardware available to actually run what I’m coding. Around 2006, I heard a saying from a former Rear Admiral:
-Wayne Meyers, Rear Admiral, U.S. Navy
I really identify with that statement, as it succinctly states why I always want to keep running and testing what I’m writing. It either allows you to confirm your assumptions or reveals you are heading down the wrong path, allowing you to quickly get on the right path without throwing away a lot of work. This was yet another reason that I chose Java as my prototype platform, as I could quickly start running and testing code plus graph it out visually, in spite of not having the actual speed bag hardware.
Additionally, you will see in the middle of all six runFx() functions there is code that keeps track of the current time in milliseconds and verifies that the time stamp delta in milliseconds has elapsed, or it sleeps for 1 millisecond. This allowed me to watch the data scroll by in my Java plotting window and see how the filtering output looks. I passed in X, Y and Z acceleration data along with X, Y and Z average values. Since I only used Z data in most algorithms, I started cheating and sending in other values to be plotted, so it’s a little confusing when looking at the graphs of one through five since they don’t match the legend. However, plotting in real time allowed me to see the data and watch the hit counter increment. I could actually see and feel a sense of the rhythm into which the punches were settling and how the acceleration data was being affected by the resonance at prolonged constant rhythm. In addition to the visual output, using the Java System.out.println() function I can output data to a window in the NetBeans IDE.
If you look in the Java subdirectory in my GitHub repository, there is a file named MainLoop.java. In that file, I have a set of functions named runF1() through runF6(). These were my six major iterations of the speed bag algorithm code.
Here are some highlights for each of the six iterations.
runF1() used only the Z axis, and employed weak bias removal using a sliding window and fixed amplification of the filtered Z data. I created an element called delay, which is a way to delay input data so it could be aligned later with output of averaged results. This allowed the sliding window average to be subtracted from Z axis data based on surrounding values, not by previous values. Punch detection used straight comparison of amplified filter data being greater than average of five samples with a minimum of 250 milliseconds between detections.
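The runF1() recipe above — sliding-window bias removal, a fixed threshold, and a 250ms gate — can be sketched in a few lines of C. The struct, window length, and threshold here are my own placeholders, not the project’s tuned numbers:

```c
#include <stdint.h>

#define WIN 16   /* sliding-window length for the bias estimate (placeholder) */

typedef struct {
    int32_t  window[WIN]; /* recent raw samples               */
    int32_t  sum;         /* running sum of the window        */
    int      idx;         /* next window slot to overwrite    */
    uint32_t last_hit_ms; /* time of the last counted punch   */
    int      hits;        /* running punch count              */
} Detector;

/* Feed one Z-axis sample; returns 1 when a punch is counted. */
int detector_feed(Detector *d, int32_t z, uint32_t now_ms, int32_t threshold)
{
    d->sum += z - d->window[d->idx];     /* maintain the window sum cheaply */
    d->window[d->idx] = z;
    d->idx = (d->idx + 1) % WIN;
    int32_t filtered = z - d->sum / WIN; /* remove the sliding-window bias  */
    if (filtered > threshold && now_ms - d->last_hit_ms >= 250) {
        d->last_hit_ms = now_ms;         /* 250 ms refractory gate          */
        d->hits++;
        return 1;
    }
    return 0;
}
```

Two spikes arriving 10ms apart count as a single punch, which is exactly the behavior (good and bad) described above: the gate keeps the count sane, but it’s doing most of the work.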
runF2() used only the Z axis, and employed weak bias removal via a sliding window but added dynamic beta amplification of the filtered Z data, based on the average amplitude above the bias that was removed when the last punch was detected. Also, a dynamic minimum time between punches of 225ms to 270ms was calculated based on the delta time since the last punch was detected. I called the amount of bias removed the noise floor. I added a button to stop and resume the simulation so I could examine the debug output and the waveforms. This allowed me to see the beta amplification being used as the simulation went along.
runF3() used X and Z axis data. My theory was that there might be a jolt of movement from the punching action that could be additive to the Z axis data to help pinpoint the actual punch. It was basically the same algorithm as runF2() but added in the X axis. It actually worked pretty well, and I thought I might be onto something by correlating X movement with Z. I tried various tweaks and gyrations, as you can see from the many commented-out experiments in the code. I started playing around with what I call a compressor, which took the sum of five samples to see if it would detect bunches of energy around when punches occur. I didn’t use it in the algorithm but printed out how many times it crossed a threshold to see if it had any potential as a filtering element. In the end, this algorithm started to implode on itself, and it was time to take what I learned and start a new algorithm.
In runF4(), I increased the bias removal average to 50 samples. I started to work in attenuation and sample compression, along with a fixed-point LSB to preserve some decimal precision in the integer attenuated data. Since one of the requirements was that this should be able to run on 8-bit microcontrollers, I wanted to avoid using floating point and time-consuming math functions in the final C/C++ code. I’ll speak more to this in the components section, but, for now, know that I’m starting to work this in. I’ve convinced myself that finding bursts of acceleration is the way to go. At this point, I am removing the bias from both the Z and X axis, then squaring. I then attenuate each, adding the results together but scaling the X axis value by 10. I added a second stage of averaging 11 filtered values to start smoothing the bursts of acceleration. Next, when the smoothed value gets above a fixed threshold of 100, the unsmoothed combination of Z and X squared starts getting loaded into the compressor until 100 samples have been added. If the compressor output of the 100 samples is greater than 5000, it is recorded as a hit. A variable time-between-punches gate is employed, but it is much smaller, since the compressor is using 100 samples to encapsulate the punch detection. This lowers the gate time to between 125 and 275 milliseconds. While showing some promise, it was still too sensitive: one data set would be spot on while another would be off by 10 or more punches. After many tweaks and experiments, this algorithm began to implode on itself, and it was once again time to take what I’d learned and start anew. I should mention that at this time I was starting to think there might not be a satisfactory solution to this problem. The resonant vibrations, which seem to be out of phase with the contacts of the bag, just wreak havoc on the acceleration seen when the boxer gets into a good rhythm. Could this all just be a waste of time?
runF5()'s algorithm started out with the notion that a more formal high pass filter needed to be introduced, rather than an average subtracted from the signal. The basic premise of the high pass filter was to use 99% of the value of each new sample added to 1% of the value of the running average. An important concept added towards the end of runF5's evolution was simplifying the algorithm by moving the first stage of processing into its own file to isolate it from later stages. Divide and conquer has been around forever, and it holds true time and time again. I tried many experiments, as you can see from the many commented-out lines in the algorithm and in the FrontEndProcessorOld.java file. In the end, it was time to carry forward the new Front End Processor concept and start anew with divide and conquer and a more formal high pass filter.
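As an illustration only, the "mostly new sample, small leak toward the average" idea reads to me like a standard one-pole high pass filter. This sketch is my own interpretation of the description, not the project's actual code; the struct and names are mine:

```c
/* One-pole high pass filter in the spirit of the running-average approach:
 * y[n] = a * (y[n-1] + x[n] - x[n-1]), with a close to 1 (here 0.99).
 * Slow components such as a constant bias decay toward zero, while fast
 * changes pass through. This is a sketch, not the project's code. */
typedef struct {
    double prev_in;   /* previous input sample  */
    double prev_out;  /* previous output sample */
} OnePoleHP;

static void hp_init(OnePoleHP *f) {
    f->prev_in = 0.0;
    f->prev_out = 0.0;
}

static double hp_step(OnePoleHP *f, double x) {
    double y = 0.99 * (f->prev_out + x - f->prev_in);
    f->prev_in = x;
    f->prev_out = y;
    return y;
}
```

Feeding a constant (DC) input into this filter produces an output that decays toward zero, which is exactly the bias-removal behavior the algorithm needed.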
With time running out, it was time to pull together all that had been learned, get the Java code ready to port to C/C++, and implement real filters as opposed to running averages. In runF6(), I pulled together the theory that I need to filter out the bias on the front end with a high pass filter and then use a low pass filter on the remaining signal to find bursts of acceleration that occur at a 2 to 4 Hertz frequency. No way was I going to learn how to calculate my own filter tap values to implement the high and low pass filters in the small amount of time left before the deadline. Luckily, I discovered the t-filter website. Talk about a triple play. Not only was I able to put in my parameters and get filter tap values, I was also able to leverage the C code it generated, with a few tweaks, in my Java code. Plus, it converted the tap values to fixed point for me! Fully employing the divide and conquer concept, this final version of the algorithm introduced isolated sub-algorithms for both the Front End Processor and Detection Processing. This allowed me to isolate the two functions from each other, except for the output signal of one becoming the input to the other, which let me focus on the task at hand rather than sift through a large group of variables, some of which might be shared between the two stages.
With this division of responsibility, it is now easy to focus on making the clear task of the Front End Processor to remove the bias values and output at a level that is readily acceptable for input into the Detection Processor. Now the Detection processor can clearly focus on filtering and implementing a state machine that can pick out the punch events that should occur between 2 and 4 times per second.
One thing to note is that this final algorithm is much smaller and simpler than some of the previous algorithms. Even though it's software, at some point in the process you should still apply a technique called Muntzing. Muntzing is going back to look at what can be removed without breaking the functionality. Every line of code that is removed is one less line of code that can have a bug. You can Google Earl “Madman” Muntz to get a better understanding and feel for the spirit of Muntzing.
Final output of DET
Above is the visual output from runF6(). The green line is the output of the low pass filter delayed by 49 samples, and the yellow line is an average of 99 values of the output of the low pass filter. The Detection Processor includes a detection algorithm that detects punches by tracking min and max crossings of the green signal, using the yellow signal as a template for dynamic thresholding. Each minimum is a red spike, and each maximum is a blue spike, which is also a punch detection. The timescale is in milliseconds. Notice there are about three blue spikes per second, inside the predicted 2 to 4 Hz range. And the rest is history!
Here is a brief look at each type of component I used in the various algorithms.
This is used to buffer a signal so you can time-align it to some other operation. For example, if you average nine samples and want to subtract that average from the original signal, you can delay the original signal by four samples so that the average is centered on the delayed value: the sample itself plus the four samples before it and the four samples after it.
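A delay component like this can be sketched with a ring buffer. The names, the buffer length, and the convention of returning zeros until the buffer fills are all my own choices for illustration:

```c
#include <string.h>

/* Delay line: returns the sample from DELAY_LEN steps ago.
 * Until the buffer fills, it returns 0 (one reasonable start-up
 * convention; the article doesn't specify one). */
#define DELAY_LEN 4

typedef struct {
    int buf[DELAY_LEN];
    int idx;
} Delay;

static void delay_init(Delay *d) {
    memset(d, 0, sizeof *d);
}

static int delay_step(Delay *d, int x) {
    int out = d->buf[d->idx];       /* value from DELAY_LEN samples ago */
    d->buf[d->idx] = x;             /* overwrite oldest with newest     */
    d->idx = (d->idx + 1) % DELAY_LEN;
    return out;
}
```

With DELAY_LEN set to 4, the fifth sample pushed in pops the first one out, which is the alignment trick used with the nine-sample average above.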
Attenuation is a simple but useful operation that scales a signal down before it is amplified by filtering or some other operation that adds gain. Typically attenuation is measured in decibels (dB), and you can attenuate power or amplitude depending on your application. If you cut the amplitude in half, you are reducing it by -6 dB; if you want to attenuate by other dB values, you can check the dB scale here. As it relates to the Speedbag algorithm, I'm basically trying to create clear gaps in the signal, squelching or squishing smaller values closer to zero so that squaring values later really pushes the peaks higher while having much less effect on the values pushed down towards zero. I used this technique to help accentuate the bursts of acceleration against the background vibrations of the speed bag platform.
Sliding Window Average is a technique for calculating a continuous average of the incoming signal over a given window of samples. The number of samples being averaged is known as the window size. The way I like to implement a sliding window is to keep a running total of the samples and a ring buffer to keep track of the values. Once the ring buffer is full, the oldest value is removed and replaced with the next incoming value; the value removed from the ring buffer is subtracted from the new value, and that result is added to the running total. Then simply divide the running total by the window size to get the current average whenever needed.
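The ring-buffer-plus-running-total approach might look like this in C. The window size and the integer types are my own choices for the sketch:

```c
#include <string.h>

/* Sliding window average: ring buffer plus running total, so each new
 * sample costs one add, one subtract, and one divide. */
#define WIN 5

typedef struct {
    int buf[WIN];
    int idx;
    long total;
} SlidingAvg;

static void sw_init(SlidingAvg *s) {
    memset(s, 0, sizeof *s);
}

static int sw_step(SlidingAvg *s, int x) {
    s->total += x - s->buf[s->idx];  /* add new sample, drop the oldest */
    s->buf[s->idx] = x;
    s->idx = (s->idx + 1) % WIN;
    return (int)(s->total / WIN);    /* current average of the window */
}
```

The win here is cost: the average stays O(1) per sample instead of re-summing the whole window every time, which matters on an 8-bit micro.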
This is a very simple concept: change the sign of the values so they are all positive or all negative, making them additive. In this case, I used rectification to change all values to positive. As with analog rectification, you can use a full-wave or half-wave method. You can easily do full-wave rectification by using the abs() math function, which returns the value as positive. You can square values to turn them positive, but that changes the amplitude; a simple rectify turns them positive without any other effects. To perform half-wave rectification, you can just set any value less than zero to zero.
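Both flavors are one-liners in C; these are generic sketches, not the project's code:

```c
#include <stdlib.h>

/* Full-wave rectification: flip negative values positive. */
static int rect_full(int x) {
    return abs(x);
}

/* Half-wave rectification: zero out negative values. */
static int rect_half(int x) {
    return x < 0 ? 0 : x;
}
```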
In the DSP world, compression typically means compressing the amplitudes to keep them in a close range. My compression technique here is to sum up the values in a window of samples. This is a form of down-sampling, as you only get one sample out each time the window is filled, but no values are thrown away; it's a pure total of the window or, optionally, an average of the window. This was employed in a few of the algorithms to try to distinguish bursts of acceleration from quieter times. I didn't actually use it in the final algorithm.
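A sketch of this window-sum "compressor" idea, with names and the window size chosen by me for illustration:

```c
/* "Compressor" in the article's sense: total up a window of samples and
 * emit one value per filled window. Returns 1 when an output is ready. */
#define CWIN 100

typedef struct {
    long sum;
    int count;
} Compressor;

static int comp_step(Compressor *c, int x, long *out) {
    c->sum += x;
    if (++c->count == CWIN) {
        *out = c->sum;   /* one pure total per window, nothing discarded */
        c->sum = 0;
        c->count = 0;
        return 1;
    }
    return 0;
}
```

In runF4() this window total was compared against a threshold (5000 over 100 samples) to decide whether a burst of energy counted as a hit.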
Finite Impulse Response (FIR) is a digital filter implemented via a number of taps, each with its assigned polynomial coefficient. The number of taps is known as the filter's order. One strength of the FIR is that it does not use any feedback, so any rounding errors are not cumulative and will not grow larger over time. A finite impulse response simply means that if you input a stream of samples consisting of a one followed by all zeros, the output of the filter will go to zero within at most order + 1 zero-value samples being fed in. So the response to that single one-valued sample lives for a finite number of samples and is gone. This is essentially achieved by the fact that there isn't any feedback employed. I've seen DSP articles claim that calculating the number of taps and their coefficients is simple, but not to me. I ended up finding an online app called tFilter that saved me a lot of time and aggravation. You pick the type of filter (low, high, bandpass, bandstop, etc.), then set up your frequency ranges and the sampling frequency of your input data. You can even have your coefficients produced in fixed point to avoid using floating point math. If you're not sure how to use fixed point, or have never heard of it, I'll talk about that in the Embedded Optimization Techniques section.
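To show the no-feedback structure, here is a toy three-tap FIR (a simple moving sum). Real filters from tFilter have many taps and properly scaled coefficients; this is only a structural sketch with names of my own:

```c
/* Toy 3-tap FIR: output = sum of coeff[i] * recent inputs.
 * No feedback, so an impulse dies out after the history flushes. */
#define TAPS 3

static const int coeff[TAPS] = {1, 1, 1};

typedef struct {
    int hist[TAPS];  /* ring buffer of recent inputs */
    int idx;
} Fir;

static int fir_step(Fir *f, int x) {
    f->hist[f->idx] = x;
    f->idx = (f->idx + 1) % TAPS;
    int acc = 0;
    for (int i = 0; i < TAPS; i++)
        acc += coeff[i] * f->hist[i];  /* depends only on past inputs */
    return acc;
}
```

Feeding in a one followed by zeros, the output is nonzero for exactly TAPS samples and then goes to zero for good, which is the "finite impulse response" behavior described above.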
Mag Square is a technique that can save the computing power of calculating square roots. For example, if you want to calculate the vector magnitude for the X and Z axes, normally you would compute val = sqrt((X * X) + (Z * Z)). However, you can simply leave the value as (X * X) + (Z * Z); unless you really need the exact vector value, the Mag Square gives you a usable ratio compared to other vectors calculated on subsequent samples. The numbers will be much larger, and you may want to use attenuation to make them smaller to avoid overflow from additional computation downstream.
I used this technique in the final algorithm to help accentuate the bursts of acceleration against the background vibrations. I only used Z * Z in my calculation, but I then attenuated all the values by half, or -6 dB, to bring them back down to reasonable levels for further processing. For example, after removing the bias, if I had some samples around 2 and some around 10, squaring those values gives 4 and 100, a 25-to-1 ratio. Now, if I attenuate by 0.5, I have 2 and 50: still a 25-to-1 ratio, but with smaller numbers to work with.
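The mag square plus attenuation combination is tiny in code. This is a generic sketch, not the project's code:

```c
/* Mag Square: skip the sqrt and keep (x*x + z*z) as a comparable value. */
static long mag_square(long x, long z) {
    return x * x + z * z;
}

/* -6 dB attenuation: halve the amplitude to keep numbers manageable.
 * Ratios between samples survive the scaling. */
static long attenuate_half(long v) {
    return v / 2;
}
```

Running the article's numbers through it: bias-removed samples of 2 and 10 square to 4 and 100, and halving gives 2 and 50, preserving the 25-to-1 ratio.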
Using fixed point numbers is another way to stretch performance, especially on microcontrollers. Fixed point is basically integer math, but it can keep precision via an implied fixed decimal point at a particular bit position in all integers. In the case of my FIR filter, I instructed tFilter to generate polynomial values in 16-bit fixed point values. My motivation for this was to ensure I don’t use more than 32-bit integers, which would especially hurt performance on an 8-bit microcontroller.
Rather than go into the FIR filter code to explain how fixed point works, let me first use a simple example. While the FIR filter algorithm does complex filtering with many polynomials, we could implement a simple filter that outputs the input signal at -6 dB, or half its amplitude. In floating point terms, this would be a simple one-tap filter that multiplies each incoming sample by 0.5. To do this in fixed point with 16-bit precision, we need to convert 0.5 into its 16-bit fixed point representation. A value of 1.0 is represented by 1 * 2^16, or 65,536; anything less than 65,536 is a value less than 1. To create a fixed point integer for 0.5, we use the same formula, 0.5 * 2^16, which equals 32,768. Now we can use that value to halve the amplitude of every input sample. For example, say we input a sample with the value of 10. The filter calculates 10 * 32768 = 327,680, which is the fixed point representation of the result. If we no longer care about preserving the precision after the calculations are performed, the result can easily be turned back into a plain integer by right shifting by the number of bits of precision being used: 327680 >> 16 = 5. As you can see, our filter changed 10 into 5, which is the one half, or -6 dB, we wanted. I know 0.5 was pretty simple, but if you wanted 1/8th the amplitude, the same process applies: 65536 * 0.125 = 8192. If we input a sample of 16, then 16 * 8192 = 131,072, and turning it back into an integer gives 131072 >> 16 = 2. Just to demonstrate how precision is lost when converting back to an integer (the same as going from float to integer): if we input 10 into the 1/8th filter, we get 10 * 8192 = 81,920, and turning that back into an integer gives 81920 >> 16 = 1, even though the fixed point representation held 1.25.
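The worked example above can be captured in two small helpers. These are my own illustration of the scheme, not code from the project:

```c
/* 16-bit fixed point: 1.0 is represented by 1 << 16 = 65536. */
#define FRAC_BITS 16

/* Convert a real coefficient to its fixed point representation,
 * e.g. 0.5 -> 32768, 0.125 -> 8192. */
static long fixed_from_double(double v) {
    return (long)(v * (1L << FRAC_BITS));
}

/* Multiply a sample by a fixed point coefficient, then shift the
 * fraction bits away to recover a plain integer (precision is lost
 * here, just as in a float-to-int conversion). */
static int fixed_scale(int sample, long coeff) {
    return (int)(((long)sample * coeff) >> FRAC_BITS);
}
```

With these, fixed_scale(10, 32768) reproduces the 10-becomes-5 example, and fixed_scale(10, 8192) reproduces the truncation of 1.25 down to 1.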
Getting back to the FIR filters, I picked 16 bits of precision so I could have a fair amount of precision balanced with a reasonable range of whole numbers. Normally, a signed 32-bit integer has a range of -2,147,483,648 to +2,147,483,647; however, with 16 bits reserved for the fraction, only 16 bits of whole numbers remain, a range of -32,768 to +32,767. Since you are now limited in the range of numbers you can use, you need to be cognizant of the values being fed in. If you look at the FEPFilter_get function, you will see there is an accumulator variable accZ which sums the values from each of the taps. Usually, if your tap history values are 32-bit, you make your accumulator 64-bit to be sure it can hold the sum of all tap values. However, you can use a 32-bit value if you ensure that your input values are all less than some maximum. One way to calculate your maximum input value is to sum the absolute values of the coefficients and divide the largest representable value by that sum. In the case of the FEP FIR filter, the sum of the coefficients was 131,646, so with 15 bits of positive whole numbers plus 16 bits of fractional numbers, I can use the formula 2^31 / 131646, which gives the FEP a maximum input value of ±16,312. In this case, another optimization can be realized: the microcontroller doesn't have to do 64-bit calculations.
Before walking through the processing chain, we should discuss delays caused by filtering. Many types of filtering add delays to the signal being processed. If you do a lot of filtering work, you are probably well aware of this fact, but, if you are not all that experienced with filtering signals, it’s something of which you should be aware. What do I mean by delay? This simply means that if I put in a value X and I get out a value Y, how long it takes for the most impact of X to show up in Y is the delay. In the case of a FIR filter, it can be easily seen by the filter’s Impulse response plot, which, if you remember from my description of FIR filters, is a stream of 0’s with a single 1 inserted. T-Filter shows the impulse response, so you can see how X impacts Y’s output. Below is an image of the FEP’s high pass filter Impulse Response taken from the T-Filter website. Notice in the image that the maximum impact on X is exactly in the middle, and there is a point for each tap in the filter.
Below is a diagram of a few of the FEP's high pass filter signals. The red signal is the input from the accelerometer, the newest sample going into the filter, and the blue signal is the oldest sample in the filter's ring buffer. There are 19 taps in the FIR filter, so these two signals plot the first and last samples in the filter window. The green signal is the value coming out of the high pass filter. To relate to my X and Y analogy above, the red signal is X and the green signal is Y. The blue signal is delayed by 36 milliseconds in relation to the red input signal, which is exactly 18 samples at 2 milliseconds each; this is the window of data the filter works on, and it is the finite amount of time X affects Y.
Notice the output of the high pass filter (green signal) seems to track changes from the input at a delay of 18 milliseconds, which is 9 samples at 2 milliseconds each. So, the most impact from the input signal is seen in the middle of the filter window, which also coincides with the Impulse Response plot where the strongest effects of the 1 value input are seen at the center of the filter window.
It’s not only a FIR that adds delay. Usually, any filtering that is done on a window of samples will cause a delay, and, typically, it will be half the window length. Depending on your application, this delay may or may not have to be accounted for in your design. However, if you want to line this signal up with another unfiltered or less filtered signal, you are going to have to account for it and align it with the use of a delay component.
I’ve talked at length about how to get to a final solution and all the components that made up the solution, so now let’s walk through the processing chain and see how the signal is transformed into one that reveals the punches. The FEP’s main goal is to remove bias and create an output signal that smears across the bursts of acceleration to create a wave that is higher in amplitude during increased acceleration and lower amplitude during times of less acceleration. There are four serial components to the FEP: a High Pass FIR, Attenuator, Rectifier and Smoothing via Sliding Window Average.
The first image is the input and output of the High Pass FIR. Since they are offset by the amount of bias, they don’t overlay very much. The red signal is the input from the accelerometer, and the blue is the output from the FIR. Notice the 1g of acceleration due to gravity is removed and slower changes in the signal are filtered out. If you look between 24,750 and 25,000 milliseconds, you can see the blue signal is more like a straight line with spikes and a slight ringing on it, while the original input has those spikes but meandering on some slow ripple.
Next is the output of the attenuator. While this component works on the entire signal, it lowers the peak values of the signal, but its most important job is to squish the quieter parts of the signal closer to zero values. The image below shows the output of the attenuator, and the input was the output of the High Pass FIR. As expected, peaks are much lower but so is the quieter time. This makes it a little easier to see the acceleration bursts.
Next is the rectifier component. Its job is to turn all the acceleration energy in the positive direction so that it can be used in averaging. For example, an acceleration causing a positive spike of 1000 followed by a negative spike of 990 would yield an average of 5, while 1000 followed by a positive 990 would yield an average of 995, a huge difference. Below is an image of the rectifier output. The bursts of acceleration are slightly more visually apparent, but not easily discernible. In fact, this image shows exactly why this problem is such a tough one to solve: you can clearly see how resonant shaking of the base changes the pattern as punch energy is added. The left side has lower and more frequent peaks, while the right side has higher but less frequent peaks.
The 49-value sliding window is the final step in the FEP. While we have made subtle changes to the signal that haven't exactly made the punches jump out in the images, this final stage makes it visually apparent that the signal is well on its way to yielding the hidden punch information. The fruits of the previous signal processing magically show up at this stage. Below is an image of the sliding window average. The blue signal is its input, the output of the rectifier, and the red signal is the output of the sliding window, which is also the final output of the FEP stage of processing. Since it is a window, it has a delay associated with it: approximately 22 samples, or 44 milliseconds, on average. It doesn't always look that way, because sometimes the input signal spikes are suddenly tall with smaller ringing afterwards, while other times small spikes lead up to the tall spikes; that makes the sliding window average output appear inconsistent in its delay, depending on where the peak of the output shows up. Although these bumps are small, they now represent where new acceleration energy is being introduced by punches.
Now it’s time to move on to the Detection Processor (DET). The FEP outputs a signal that is starting to show where the bursts of acceleration are occurring. The DET’s job will be to enhance this signal and employ an algorithm to detect where the punches are occurring.
The first stage of the DET is an attenuator. Eventually, I want to add exponential gain to the signal to really pull up the peaks but, before doing that, it is important to once again squish the lower values down towards zero and lower the peaks, to keep from generating values too large to process in the rest of the DET chain. Below is an image of the output from the attenuator stage. It looks just like the signal output from the FEP; however, notice that the signal peaks were above 100 out of the FEP and now barely reach 50. The vertical scale is zoomed in with the max amplitude set to 500, so you can see there is a viable signal with punch information.
With the signal sufficiently attenuated, it’s time to create the magic. The Magnitude Square function is where it all comes together. The attenuated signal carries the tiny seeds from which I’ll grow towering Redwoods. Below is an image of the Mag Square output, the red signal is the attenuated input, and the blue signal is the mag square output. I’ve had to zoom out to a 3,000 max vertical, and, as you can see, the input signal almost looks flat, yet the mag square was able to pull out unmistakable peaks that will aid the detection algorithm to pick out punches. You might ask why not just use these giant peaks to detect punches. One of the reasons I’ve picked this area of the signal to analyze is to show you how the amount of acceleration can vary greatly as you can see the peak between 25,000 and 25,250 is much smaller than the surrounding peaks, which makes pure thresholding a tough chore.
Next, I decided to add a low pass filter to remove the fast-changing parts of the signal, since I'm looking for events that occur in the 2 to 4 Hz range. It was tough to get T-Filter to create a tight low pass filter with a 0 to 5 Hz passband, as it generated filters with over 100 taps; I didn't want to take that processing hit, not to mention I would then need a 64-bit accumulator to hold the sum. So I relaxed the passband to 0 to 19 Hz, with the stopband at 100 to 250 Hz. Below is an image of the low pass filter output. The blue signal is the input, and the red signal is the delayed output; I used this image because it allows the input and output signals to be seen without interfering with each other. The delay comes from the 6-sample delay of the low pass FIR, but I also introduced a 49-sample delay to this signal so that it aligns with the center of the 99-sample sliding window average that follows in the processing chain; in total it is delayed by 55 samples, or 110 milliseconds. In this image, you can see the slight amplification of the slow peaks by their height, and how the signal is smoothed as the faster-changing elements are attenuated. Not a lot is going on here, but the signal is a little cleaner. Earl Muntz might suggest I cut the low pass filter out of the circuit, and it might very well work without it.
The final stage of the signal processing is a 99 sample sliding window average. I built into the sliding window average the ability to return the sample in the middle of the window each time a new value is added and that is how I produced the 49 sample delayed signal in the previous image. This is important because the detection algorithm is going to have 2 parallel signals passed into it, the output of the 99 sliding window average and the 49 sample delayed input into the sliding window average. This will perfectly align the un-averaged signal in the middle of the sliding window average. The averaged signal is used as a dynamic threshold for the detection algorithm to use in its detection processing. Here, once again, is the image of the final output from the DET.
In the image, the green and yellow signals are inputs to the detection algorithm, and the blue and red spikes are its outputs. As you can see, the green signal, which is delayed 49 samples, aligns perfectly with the peaks of the yellow 99-sample sliding window average. The detection algorithm monitors the crossings of the yellow signal by the green signal. This is accomplished by maximum and minimum start guard states that verify the signal has moved far enough in the minimum or maximum direction relative to the yellow signal, then switch to states that monitor the green signal for enough change in direction to declare a maximum or minimum. When a peak start occurs and it has been at least 260 ms since the last detected peak, the state machine switches to monitoring for a new peak in the green signal and produces the blue spike seen in the image; this is when a punch count is registered. Once a new peak has been detected, the state changes to look for the start of a new minimum. If the green signal then falls below the yellow by a delta of 50, the state changes to look for a new minimum of the green signal. Once the green signal minimum is declared, the state changes to start looking for the start of a new peak, and a red spike is shown on the image when this occurs.
Again, I’ve picked this time in the recorded data because it shows how the algorithm can track the punches even during big swings in peak amplitude. What’s interesting here is if you look between the 24,750 and 25,000 time frame, you can see the red spike detected a minimum due to the little spike upward of the green signal, which means the state machine started to look for the next start of peak at that point. However, the green signal never crossed the yellow line, so the start of peak state rode the signal all the way down to the floor and waited until the cross of the yellow line just before the 25,250 mark to declare the next start of peak. Additionally, the peak at the 25,250 mark is much lower than the surrounding peaks, but it was still easily detected. Thus, the dynamic thresholding and the state machine logic allows the speed bag punch detector algorithm to “Roll with the Punches”, so to speak.
To sum up, we've covered a lot of ground in this article. First, the importance of fully understanding the problem as it relates to the required end item, along with the domain knowledge needed to get there. Second, for a problem of this nature, creating a scaffold environment to build the algorithm was imperative; in this instance, it was the Java prototype with a visual display of the signals. Third, implement for the target environment: on a PC you have wonderful optimizing compilers for powerful CPUs with tons of cache, but on a microcontroller the optimization is really left to you, so use every optimization trick you know to keep processing as quick as possible. Fourth, iterative development can help on problems like this; keep reworking the problem while folding in the knowledge you gain during development.
When I look back on this project and think about what ultimately made me successful, I can think of two main things. Creating the right tools for the job was invaluable. Being able to see how my processing components were affecting the signal, not just plotting the output but plotting it in real time, allowed me to fully understand the acceleration being generated. It was as if Nate was in the corner punching the bag, and I was watching the waveform roll in on my screen. The biggest factor, however, was realizing that in the end I was looking for something that happens 2 to 4 times per second. I latched on to that and relentlessly pursued how to translate the raw incoming signal into something that would show those events. There was nothing for me to Google to find that answer. Remember, knowledge doesn't really come from books; it gets recorded in books. First, someone has to go off script and discover something, and then it becomes knowledge. Apply the knowledge you have and can find, but don't be afraid to use your imagination to try what hasn't been tried before to solve an unsolved problem. So remember, when you metaphorically come to the end of the paved road: will you turn around looking for a road already paved, or will you lock in the hubs and keep plowing ahead to make your own discovery? I wasn't able to just Google how to count punches with an accelerometer, but now someone can.
You never know when you’ll need a capacitor. Sometimes you need a little more power supply decoupling, an output coupling cap, or careful tuning of a filter circuit – all applications where capacitors are critical. The SparkFun Capacitor Kit contains a wide range of capacitor values, so you will always have them on hand when you need them.
This tutorial will help you identify the contents of your kit, and show you a couple tricks to expand the range of values even further.
The Capacitor Kit contains caps on decade intervals from 10 picofarads to 1000 microfarads.
|Capacitor Kit Contents|
There are ten pieces of most values, but 25 pieces of 100 nanofarads, which are commonly used for local supply decoupling near ICs. There are also ten pieces of 22pf, which are frequently used as load capacitors when building crystal oscillators.
Let’s face it, a Farad is a lot of capacitance. Capacitor values are usually tiny – often in the millionths or billionths of a Farad. To express those small values succinctly, we use the metric system. The following prefixes are the modern convention.
|Capacitor Metric Prefixes|
The smaller values in the kit are 50V-rated ceramic capacitors. These are small, nonpolarized caps with a yellow blob for a body.
From Left to Right: 10 pF, 22 pF, 100 pF, 1 nF, 10 nF, 100 nF
The value is printed on each in a three-digit code. This code is similar to the color code on resistors, but uses digits instead of colors. The first two digits are the two most significant digits of the value, and the third digit is the exponent on the 10. The value is expressed in picofarads.
To decode the value, take the first two digits, then follow them with the number of zeros indicated by the third digit. 104 becomes “10” followed by “0000,” or 100000 pF, more succinctly written as 100 nF.
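The decoding rule above is easy to express in code. This is a generic sketch (function name is mine):

```c
/* Decode the three-digit ceramic capacitor code: the first two digits
 * are the significant figures, the third is the number of trailing
 * zeros. The result is in picofarads, e.g. 104 -> 100000 pF (100 nF). */
static long cap_code_to_pf(int code) {
    int sig = code / 10;   /* first two digits  */
    int zeros = code % 10; /* third digit       */
    long pf = sig;
    while (zeros-- > 0)
        pf *= 10;
    return pf;
}
```

Note this covers the common codes in the kit; some tiny values (under 10 pF) use special markings that this simple rule doesn't handle.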
Electrolytic caps have larger, cylindrical bodies that look like small soda cans. They typically offer higher capacitance than ceramic caps. Unlike ceramics, they are polarized.
From Left to Right: 1µF, 10µF, 100µF, 1000µF
The markings on the ‘lytic caps are easily legible – the value and units are printed right on the body.
The value is followed with the voltage rating, indicating the maximum DC potential that the cap can withstand without damage. In this kit, the 1 µF is rated to 50V, the others are rated to 25V.
The higher capacitance of electrolytics comes with a somewhat tedious detail: they are polarized. The positive leg needs to be kept at a higher DC potential than the negative leg. If they're installed backwards, they're prone to explode.
Thankfully, the leads are clearly marked.
There are two polarity indicators on an electrolytic cap:
The kit specifically includes 22 pF ceramic caps for building crystal oscillators, commonly required by microcontroller ICs.
The crystal oscillator circuit from the ProMicro
This kit offers a wide array of values, but the decade-by-decade selection leaves some gaps in between. There are a couple of tricks that can be used to bridge those gaps, by combining caps in series or parallel.
The values of capacitors wired in parallel are added together. You can gang up smaller caps to effectively form a larger cap.
Capacitors wired in series combine in an inverse sum – take the reciprocal of each value, and add them together, then take the reciprocal of that sum.
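The two combination rules can be written as small helper functions (a generic sketch; units just need to be consistent, e.g. both values in nF):

```c
/* Capacitors in parallel simply add. */
static double cap_parallel(double c1, double c2) {
    return c1 + c2;
}

/* Capacitors in series combine as the reciprocal of the sum of
 * reciprocals (same form as resistors in parallel). */
static double cap_series(double c1, double c2) {
    return 1.0 / (1.0 / c1 + 1.0 / c2);
}
```

For example, two 10 nF caps give 20 nF in parallel and 5 nF in series, which is how you can fill in values between the kit's decade steps.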
Restated as simplified guidelines for while you're at your workbench:
Pull-up resistors are very common when using microcontrollers (MCUs) or any digital logic device. This tutorial will explain when and where to use pull-up resistors, then we will do a simple calculation to show why pull-ups are important.
Let's say you have an MCU with one pin configured as an input. If nothing is connected to the pin and your program reads the state of the pin, will it be high (pulled to VCC) or low (pulled to ground)? It is difficult to tell. This phenomenon is referred to as floating. To prevent this unknown state, a pull-up or pull-down resistor will ensure that the pin is in either a high or low state, while also using a low amount of current.
For simplicity, we will focus on pull-ups, since they are more common than pull-downs. They operate using the same concepts, except that the pull-up resistor is connected to the high voltage (usually 3.3V or 5V, often referred to as VCC) and the pull-down resistor is connected to ground.
Pull-ups are often used with buttons and switches.
With a pull-up resistor, the input pin will read a high state when the button is not pressed. In other words, a small amount of current is flowing between VCC and the input pin (not to ground), thus the input pin reads close to VCC. When the button is pressed, it connects the input pin directly to ground. The current flows through the resistor to ground, thus the input pin reads a low state. Keep in mind, if the resistor wasn’t there, your button would connect VCC to ground, which is very bad and is also known as a short.
So what value resistor should you choose?
The short and easy answer is that you want a resistor value on the order of 10kΩ for the pull-up.
A low resistor value is called a strong pull-up (more current flows), a high resistor value is called a weak pull-up (less current flows).
The value of the pull-up resistor needs to be chosen to satisfy two conditions:
1. When the button is pressed, the current flowing through the pull-up should be small, so that little power is wasted.
2. When the button is not pressed, the voltage at the input pin must be close enough to VCC to register as a logic high.
For condition 1, you don’t want the resistor’s value too low. The lower the resistance, the more power is wasted whenever the button is pressed. You generally want a large resistor value (around 10kΩ), but not so large that it conflicts with condition 2. A 4MΩ resistor might work as a pull-up, but its resistance is so large (i.e., the pull-up is so weak) that it may not do its job 100% of the time.
The general rule for condition 2 is to use a pull-up resistor (R1) that is at least an order of magnitude (1/10th) less than the input impedance (R2) of the input pin. An input pin on a microcontroller has an impedance that typically ranges from 100kΩ to 1MΩ. For this discussion, impedance is just a fancy way of saying resistance and is represented by R2 in the picture above. So, when the button is not pressed, a very small amount of current flows from VCC through R1 and into the input pin. The pull-up resistor R1 and the input pin impedance R2 divide the voltage, and this voltage needs to be high enough for the input pin to read a high state.
For example, if you use a 1MΩ resistor for the pull-up R1 and the input pin’s impedance R2 is on the order of 1MΩ (forming a voltage divider), the voltage on the input pin is going to be around half of VCC, and the microcontroller might not register the pin being in a high state. On a 5V system, what does the MCU read on the input pin if the voltage is 2.5V? Is it a high or a low? The MCU doesn’t know and you might read either a high or a low. A resistance of 10k to 100kΩ for R1 should avoid most problems.
Since pull-up resistors are so commonly needed, many MCUs, like the ATmega328 microcontroller on the Arduino platform, have internal pull-ups that can be enabled and disabled. To enable internal pull-ups on an Arduino, you can use the following line of code in your setup() function:
pinMode(5, INPUT_PULLUP); // Enable internal pull-up resistor on pin 5
Another thing to point out is that the larger the resistance of the pull-up, the slower the pin is to respond to voltage changes. This is because the system feeding the input pin is essentially a capacitor coupled with the pull-up resistor, thus forming an RC filter, and RC filters take time to charge and discharge. If you have a really fast-changing signal (like USB), a high-value pull-up resistor can limit the speed at which the pin can reliably change state. This is why you will often see 1kΩ to 4.7kΩ resistors on USB signal lines.
All of these factors play into the decision on what value pull-up resistor to use.
Let’s say you want to limit the current to approximately 1mA when the button is pressed in the circuit above, where Vcc = 5V. What resistor value should you use?
It is easy to calculate the pull-up resistor value using Ohm’s Law:
V = I × R
Referring to the schematic above, when the button is pressed the full supply voltage drops across the resistor, so Ohm’s Law becomes:
Vcc = I × R
Rearranging with some simple algebra to solve for the resistor:
R = Vcc / I
Remember to convert all of your units into volts, amps and Ohms before calculating (e.g. 1mA = 0.001 Amps). The solution is to use a 5kΩ resistor.
Now you should be familiar with what a pull-up resistor is and how it works.