Category Archives: Maker Chat (闲话创客)

Lessons in Algorithms

Introduction

Earlier this year, Nathan Seidle, founder of SparkFun, created the Crowdsourcing Algorithms Challenge (aka, the Speed Bag Challenge). From numerous fantastic entries, a winner was chosen: Barry Hannigan, who was asked to write up the process involved in solving this problem. This article is Barry Hannigan's winning approach to solving real-world problems, even when the problem is not tangibly in front of you.

Firmware Resources

You can view Barry’s code by clicking the link below.

 

BARRY'S SPEED BAG CHALLENGE GITHUB REPO


As the winner of Nate’s Speed Bag Challenge, I had the wonderful opportunity to meet with Nate at SparkFun’s headquarters in Boulder, CO. During our discussions, we thought it would be a good idea to create a tutorial describing how to go about solving a complex problem in an extremely short amount of time. While I’ll discuss specifics to this project, my hope is that you’ll be able to apply the thought process to your future projects—big or small.

Where to Start

In full-fledged software projects, from an engineer's perspective, you have four major phases:

  • Requirements
  • Design
  • Implementation
  • Test

Let's face it: the design and coding are what everyone sees as interesting, where their creative juices can flow and the majority of the fun can be had. Naturally, there is a tendency to fixate on a certain aspect of the problem being solved and jump right into designing and coding. However, I will argue that the first and last phases can be the most important in any successful project, be it large or small. If you doubt that, consider this: my solution to the speed bag problem was designed wickedly fast, and I didn't have a bag to test it on. But, with the right fixes applied in the end, the functionality was tested to verify that it produced the correct results. Conversely, a beautiful design and elegant implementation that doesn't produce the required functionality will surely be considered a failure.

I didn’t mention prototype as a phase, because depending on the project it can happen in different phases or multiple phases. For instance, if the problem isn’t fully understood, a prototype can help figure out the requirements, or it can provide a proof of concept, or it can verify the use of a new technology. While important, prototyping is really an activity in one or more phases.

Getting back to the Speed Bag Challenge, in this particular case, even though it is a very small project, I suggest that you spend a little time in each of the four areas, or you will have a high probability of missing something important. To get a full understanding of what’s required, let’s survey everything we had available as inputs. The web post for the challenge listed five explicit requirements, which you can find here. Next, there was a link to Nate’s Github repository that had information on the recorded data format and a very brief explanation of how the speed bag device would work.

In this case, I would categorize what Nate did with the first speed bag counter implementation as a prototype that helped reveal additional requirements. From Nate's write-up on how he built the system, we know it used an accelerometer attached to the base of a speed bag and that vibration data, sampled about every 2 ms, is used to count punches. We also now know that applying a polynomial smoothing function and looking for peaks above a threshold doesn't accurately detect punches.

While trying not to be too formal for a small project, I kept these objectives (requirements) in mind while working the problem:

  • The algorithm shall be able to produce the correct number of hits from the recorded data sets
  • The solution shall be able to run on 8-bit and 32-bit micros
  • Produce documentation and help others learn from the solution put forth
  • Put code and documents in a public repository or website
  • Disclose the punch count and the solution produced for the Mystery data sets
  • Accelerometer attached to top of speed bag base; orientation unknown except that +Z is up and -Z is down
  • Complex data patterns will need more than polynomial filtering; you need to adjust to incoming data amplitude variations—as Nate suspects, resonance is the likely culprit
  • You have 15 days to complete (Yikes!)

Creating the Solution

As it goes in all projects, now that you know what should be done, the realization that there isn't enough time sets in. Since I didn't have the real hardware and needed to be able to visually see the output of my algorithm, I started working it out quickly in Java on my PC and built in a way to plot the resulting waveforms on my screen. I've been using NetBeans for years to do Java development, so I started a new speed bag project there. I always use the JFreeChart library to plot data, so I added it to my project. NetBeans is a really good IDE with a built-in GUI designer. All I had to do was create a GUI layout with a blank panel where I wanted the JFreeChart to display and then, at run time, create the JFreeChart object and add it to the panel. All the oscilloscope diagrams in this article were created by the JFreeChart display. Here is an image from my quick and dirty oscilloscope GUI design page.

NetBeans IDE
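For readers who want to reproduce that kind of scope window, here is a minimal sketch of the JFreeChart setup described above; the class name and the placeholder data are mine, not from Barry's repo:

    import javax.swing.JFrame;
    import org.jfree.chart.ChartFactory;
    import org.jfree.chart.ChartPanel;
    import org.jfree.chart.JFreeChart;
    import org.jfree.chart.plot.PlotOrientation;
    import org.jfree.data.xy.XYSeries;
    import org.jfree.data.xy.XYSeriesCollection;

    public class ScopeWindow {
        public static void main(String[] args) {
            // Placeholder (time in ms, acceleration) points standing in for recorded data.
            XYSeries series = new XYSeries("Z axis");
            for (int t = 0; t < 500; t++) {
                series.add(t * 2, 100 * Math.sin(t * 0.05));
            }
            JFreeChart chart = ChartFactory.createXYLineChart(
                    "Speed Bag Scope", "Time (ms)", "Acceleration",
                    new XYSeriesCollection(series),
                    PlotOrientation.VERTICAL, true, false, false);
            // The ChartPanel is what gets dropped into the blank panel of the GUI layout.
            JFrame frame = new JFrame("Scope");
            frame.setContentPane(new ChartPanel(chart));
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        }
    }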

This algorithm was needed in a hurry, so my first pass was to be very object oriented and use every shortcut Java afforded me. Then I'd make it more C-like in nature as I nailed down the algorithm sequence. I jumped right in and plotted the X, Y and Z waveforms as they came from the recorded results. Once I got a look at the raw data, I decided to remove any biases first (i.e., gravity) and then sum the square of each waveform and take the square root. I added some smoothing by way of averaging a small number of values and employed a minimum time between threshold crossings to help filter out spikes. All in all, this seemed to make the data even worse on the plot. I decided to throw away X and Y, since I didn't know in what orientation the accelerometer was mounted and whether it would be mounted the same on different speed bag platforms anyway. To my horror, even with just the Z axis, it still looked like a mess of noise! I'm seeing peaks in the data way too close together. Only my minimum time between thresholds gate is helping make some sense of the punch count, but there really isn't anything concrete in the data. Something's not adding up. What am I missing?

Below is an image of the runF1 waveform. The blue signal is the filtered Z axis, and the red line is a threshold for counting punches. As I mentioned, if it weren't for my 250 ms minimum between punch detections, my counter would be going crazy. Notice how I introduced two 5-millisecond delays in my runF1() processing; thresholding would be a little better if the red line were moved to the right by 10 milliseconds. I'll talk more about aligning signals later in this article, but you can see in this image how time aligning signals is crucial for getting accurate results.

First filter with many peaks

The blue signal is the filtered z axis, and the red line is a threshold for counting punches.

If you look at the virtual oscilloscope output, you can see that between millisecond 25,000 and 26,000, which is 1 second in time, there are around nine distinct acceleration events. No way Nate is throwing nine punches in a second. Exactly how many punches should I expect to see per second? Back to the drawing board. I need another approach. Remember humility is your friend; if you just rush in on your high horse you usually will be knocked off it in a hurry.

Understand the Domain

Typically the requirements are drafted in the context of the domain of the problem that’s being solved, or some design aspects are developed from a requirement with domain knowledge applied. I don’t know the first thing about boxing speed bags, so time to do some Googling.

The real nugget I unearthed was that a boxer hits a speed bag, and it makes three contacts with the base: once forward (in punch direction), then it comes all the way back (opposite of punch direction) and strikes the base, and then it goes all the way forward again striking the base (in punch direction). Then the boxer punches it on its way back toward the boxer. This actually gives four opportunities to generate movement to the base, once from the shock of the boxer contacting the bag, and then three impacts with the base.

Now, what I see on the waveforms makes more sense. There isn't one shock of the bag hitting the base per punch. My second thought was: how many punches can a boxer throw at a speed bag per second? Try as I might, I could not find a straight answer to this question. I found lots of websites with maximums for shadow boxing and actual punches thrown, but no maximum for a speed bag. Time to derive my own conclusion. I thought about how far the speed bag must travel per punch and concluded that there must be a minimum amount of force to make the bag travel the distance it needs to impact the base three times. Since I'm not a boxer, all I could do was visualize hitting the bag as slowly as possible while it still made three contacts. I concluded from the video in my mind's eye that it would be difficult to hit a bag less than twice per second. OK, that's a minimum; how about a maximum? Again, I summoned my mind's eye video and this time moved my fist to strike the imaginary bag as fast as possible. Given the distance the bag needed to travel and the amount of time to move a fist in and out of the path of the bag, I concluded that about four per second is all that is possible, even for a skilled boxer. OK, it's settled: I need to find events in the data that are happening between 2 and 4 hertz. Time to get back to coding and developing!

Build a little, Test a little, Learn a lot

While everyone's brain works a little differently, I suggest that you try an iterative strategy, especially when you are solving a problem that does not have a clearly defined methodology going into it. I also suggest that when you feel you are ready to make a major tweak to an algorithm, you make a copy of the algorithm before starting to modify the copy, or start with an empty function and start pulling in pieces of the previous iteration. You can use source control to preserve your previous iteration, but I like having the previous iteration(s) in the code so I can easily reference them when working on the next iteration. I usually don't like to write more than 10 or 20 lines of code without at minimum verifying that it compiles, and I really want to run it and print something out as confirmation that my logic and assumptions are correct. I've done this my entire career and will usually complain if I don't have target hardware available to actually run what I'm coding. Around 2006, I heard a saying from a former Rear Admiral:

 

Build a little, Test a little, Learn a lot.

-Wayne Meyers, Rear Admiral, U.S. Navy

I really identify with that statement, as it succinctly states why I always want to keep running and testing what I’m writing. It either allows you to confirm your assumptions or reveals you are heading down the wrong path, allowing you to quickly get on the right path without throwing away a lot of work. This was yet another reason that I chose Java as my prototype platform, as I could quickly start running and testing code plus graph it out visually, in spite of not having the actual speed bag hardware.

Additionally, you will see in the middle of all six runFx() functions there is code that keeps track of the current time in milliseconds and verifies that the time stamp delta in milliseconds has elapsed, or it sleeps for 1 millisecond. This allowed me to watch the data scroll by in my Java plotting window and see how the filtering output looks. I passed in X, Y and Z acceleration data along with X, Y and Z average values. Since I only used Z data in most algorithms, I started cheating and sending in other values to be plotted, so the graphs of iterations one through five are a little confusing since they don't match the legend. However, plotting in real time allowed me to see the data and watch the hit counter increment. I could actually see and feel a sense of the rhythm into which the punches were settling and how the acceleration data was being affected by the resonance at prolonged constant rhythm. In addition to the visual output, I used the Java System.out.println() function to print debug data to a window in the NetBeans IDE.

If you look in the Java subdirectory in my GitHub repository, there is a file named MainLoop.java. In that file, I have a few functions named runF1() through runF6(). These were my six major iterations of the speed bag algorithm code.

Here are some highlights for each of the six iterations.

runF1

runF1() used only the Z axis, and employed weak bias removal using a sliding window and fixed amplification of the filtered Z data. I created an element called delay, which is a way to delay input data so it could be aligned later with the output of averaged results. This allowed the sliding window average to be subtracted from the Z axis data based on surrounding values, rather than only on previous values. Punch detection used a straight comparison of the amplified filtered data being greater than the average of five samples, with a minimum of 250 milliseconds between detections.

runF2

runF2() used only the Z axis, and employed weak bias removal via a sliding window but added dynamic beta amplification of the filtered Z data based on the average amplitude above the bias that was removed when the last punch was detected. Also, a dynamic minimum time between punches of 225 ms to 270 ms was calculated based on the delta time since the last punch was detected. I called the amount of bias removed the noise floor. I added a button to stop and resume the simulation so I could examine the debug output and the waveforms. This allowed me to see the beta amplification being used as the simulation went along.

runF3

runF3() used X and Z axis data. My theory was that there might be a jolt of movement from the punching action that could be additive to the Z axis data to help pinpoint the actual punch. It was basically the same algorithm as runF2 but added in the X axis. It actually worked pretty well, and I thought I might be onto something here by correlating X movement and Z. I tried various tweaks and gyrations, as you can see in the code's many commented-out experiments. I started playing around with what I call a compressor, which took the sum of five samples to see if it would detect bunches of energy around when punches occur. I didn't use it in the algorithm but printed out how many times it crossed a threshold to see if it had any potential as a filtering element. In the end, this algorithm started to implode on itself, and it was time to take what I learned and start a new algorithm.

runF4

In runF4(), I increased the bias removal average to 50 samples. It started to work in attenuation and sample compression, along with a fixed-point LSB to preserve some decimal precision in the integer attenuated data. Since one of the requirements was that this should be able to run on 8-bit microcontrollers, I wanted to avoid using floating point and time-consuming math functions in the final C/C++ code. I'll speak more to this in the components section, but, for now, know that I'm starting to work this in. I've convinced myself that finding bursts of acceleration is the way to go. At this point, I am removing the bias from both the Z and X axes and then squaring. I then attenuate each, adding the results together but scaling the X axis value by 10. I added a second stage of averaging 11 filtered values to start smoothing the bursts of acceleration. Next, when the smoothed value gets above a fixed threshold of 100, the unsmoothed combination of Z and X squared starts getting loaded into the compressor until 100 samples have been added. If the compressor output of the 100 samples is greater than 5,000, it is recorded as a hit. A variable time between punches gate is employed, but it is much smaller since the compressor is using 100 samples to encapsulate the punch detection. This lowers the gate time to between 125 and 275 milliseconds. While showing some promise, it was still too sensitive; one data set would be spot on while another would be off by 10 or more punches. After many tweaks and experiments, this algorithm began to implode on itself, and it was once again time to take what I'd learned and start anew. I should mention that at this time I'm starting to think there might not be a satisfactory solution to this problem. The resonant vibrations that seem to be out of phase with the contacts of the bag just seem to wreak havoc on the acceleration seen when the boxer gets into a good rhythm. Could this all just be a waste of time?

runF5

runF5()'s algorithm started out with the notion that a more formal high pass filter needed to be introduced, rather than subtracting an average from the signal. The basic premise of the high pass filter was to use 99% of the value of new samples added to 1% of the value of the average. An important concept added towards the end of runF5's evolution was to simplify the algorithm by moving the first stage of processing into its own file to isolate it from later stages. Divide and conquer: it's been around forever, and it really holds true time and time again. I tried many experiments, as you can see from the many commented-out lines in the algorithm and in the FrontEndProcessorOld.java file. In the end, it was time to carry forward the new Front End Processor concept and start anew with divide and conquer and a more formal high pass filter.

runF6

With time running out, it was time to pull together all that had been learned up to now, get the Java code ready to port to C/C++, and implement real filters as opposed to running averages. In runF6(), I pulled together the theory that I needed to filter out the bias on the front end with a high pass filter and then use a low pass filter on the remaining signal to find bursts of acceleration occurring in the 2 to 4 Hertz range. No way was I going to learn how to calculate my own filter tap values to implement the high and low pass filters in the small amount of time left before the deadline. Luckily, I discovered the T-Filter website. Talk about a triple play. Not only was I able to put in my parameters and get filter tap values, I was also able to leverage the C code it generated, with a few tweaks, in my Java code. Plus, it converted the tap values to fixed point for me! Fully employing the divide and conquer concept, this final version of the algorithm introduced isolated sub-algorithms for both the Front End Processor and Detection Processing. This allowed me to isolate the two functions from each other except for the output signal of one becoming the input to the other, which enabled me to focus easily on the task at hand rather than sift through a large group of variables where some might be shared between the two stages.

With this division of responsibility, it is now easy to give the Front End Processor the clear task of removing the bias and outputting at a level readily acceptable as input to the Detection Processor. The Detection Processor can then clearly focus on filtering and implementing a state machine that picks out the punch events that should occur between 2 and 4 times per second.

One thing to note is that this final algorithm is much smaller and simpler than some of the previous algorithms. Even though it's software, at some point in the process you should still apply a technique called Muntzing. Muntzing is going back and looking at what can be removed without breaking the functionality. Every line of code that is removed is one less line of code that can have a bug. You can Google Earl "Madman" Muntz to get a better understanding and feel for the spirit of Muntzing.

Final output of DET

Final output of DET

Above is the visual output from runF6. The green line is the output of the low pass filter delayed by 45 samples, and the yellow line is an average of 99 values of the output of the low pass filter. The Detection Processor includes a detection algorithm that detects punches by tracking min and max crossings of the green signal, using the yellow signal as a template for dynamic thresholding. Each minimum is a red spike, and each maximum is a blue spike, which is also a punch detection. The timescale is in milliseconds. Notice there are about three blue spikes per second, inside the 2 to 4 Hz range predicted. And the rest is history!

Algorithm Components

Here is a brief look at each type of component I used in the various algorithms.

Delay

This is used to buffer a signal so you can time align it to some other operation. For example, if you average nine samples and you want to subtract the average from the original signal, you can delay the original signal by four samples so that each value lines up with the average of itself, the four samples before it, and the four samples after it.
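Here is a minimal sketch of such a delay element, assuming integer samples (my own illustration, not the repo's implementation):

    /** Fixed-length delay line: push the newest sample, get back the one from 'depth' samples ago. */
    class Delay {
        private final int[] buf; // circular buffer of the last 'depth' samples
        private int idx = 0;
        private int primed = 0;

        Delay(int depth) { buf = new int[depth]; }

        int push(int sample) {
            int out = (primed < buf.length) ? 0 : buf[idx]; // zero until the buffer fills
            buf[idx] = sample;
            idx = (idx + 1) % buf.length;
            if (primed < buf.length) primed++;
            return out;
        }
    }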

Attenuate

Attenuation is a simple but useful operation that can scale a signal down before it is amplified in some fashion with filtering or some other operation that adds gain to the signal. Typically attenuation is measured in decibels (dB). You can attenuate power or amplitude depending on your application. If you cut the amplitude in half, you have applied -6 dB of gain; for other dB values, you can check the dB scale here. As it relates to the speed bag algorithm, I'm basically trying to create clear gaps in the signal, squelching or squishing smaller values closer to zero so that squaring values later really pushes the peaks higher while having much less effect on the values pushed down towards zero. I used this technique to help accentuate the bursts of acceleration versus background vibrations of the speed bag platform.
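For integer samples, a cheap way to attenuate in -6 dB steps is a right shift, as in these hypothetical helpers (arbitrary ratios would instead use a fixed point multiply, covered later in this article):

    // Each right shift halves the amplitude, i.e. applies -6 dB of gain.
    static int attenuate6dB(int sample)  { return sample >> 1; } // x 0.5
    static int attenuate12dB(int sample) { return sample >> 2; } // x 0.25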

Sliding Window Average

Sliding Window Average is a technique of calculating a continuous average of the incoming signal over a given window of samples. The number of samples to be averaged is known as the window size. The way I like to implement a sliding window is to keep a running total of the samples and a ring buffer to keep track of the values. Once the ring buffer is full, the oldest value is removed and replaced with the next incoming value, and the value removed from the ring buffer is subtracted from the new value. That result is added to the running tally. Then simply divide the running total by the window size to get the current average whenever needed.
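Here is a minimal sketch of the ring-buffer approach just described (my own version, not the code from the repo):

    /** Running average over the last 'window' samples, using a ring buffer and running total. */
    class SlidingWindowAverage {
        private final int[] ring;
        private int idx = 0, count = 0;
        private long total = 0;

        SlidingWindowAverage(int window) { ring = new int[window]; }

        int add(int sample) {
            total += sample - ring[idx]; // subtract the value leaving the window, add the new one
            ring[idx] = sample;
            idx = (idx + 1) % ring.length;
            if (count < ring.length) count++;
            return (int) (total / count); // current average, valid even while filling
        }
    }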

Rectify

This is a very simple concept: change the sign of the values to all positive or all negative so they are additive. In this case, I used rectification to change all values to positive. As with analog rectification, you can use a full wave or half wave method. You can easily do full wave by using the abs() math function, which returns the value as positive. You could square values to turn them positive, but that changes the amplitude; a simple rectify turns them positive without any other effects. To perform half wave rectification, you can just set any value less than zero to zero.
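In Java both methods are one-liners; these illustrative helpers assume integer samples:

    // Full wave rectification: flip negative values positive, preserving amplitude.
    static int fullWaveRectify(int sample) { return Math.abs(sample); }

    // Half wave rectification: clamp negative values to zero.
    static int halfWaveRectify(int sample) { return Math.max(0, sample); }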

Compression

In the DSP world, compression is typically defined as compressing the amplitudes to keep them in a close range. My compression technique here is to sum up the values in a window of samples. This is a form of down-sampling, as you only get one sample out each time the window is filled, but no values are thrown away; it's a pure total of the window, or optionally an average of it. This was employed in a few of the algorithms to try to distinguish bursts of acceleration from quieter times. I didn't actually use it in the final algorithm.
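For illustration, a minimal sketch of that window compressor, assuming integer samples (names and structure are my own):

    /** Window compressor: totals each block of 'window' samples into one output. */
    class Compressor {
        private final int window;
        private long sum = 0;
        private int count = 0;

        Compressor(int window) { this.window = window; }

        /** Returns the block total once the window fills, otherwise null. */
        Long add(int sample) {
            sum += sample;
            if (++count < window) return null;
            long out = sum;
            sum = 0;
            count = 0;
            return out; // one output per 'window' inputs: a form of down-sampling
        }
    }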

FIR Filter

Finite Impulse Response (FIR) is a digital filter that is implemented via a number of taps, each with its assigned polynomial coefficient. The number of taps is known as the filter's order. One strength of the FIR is that it does not use any feedback, so any rounding errors are not cumulative and will not grow larger over time. A finite impulse response simply means that if you input a stream of samples consisting of a one followed by all zeros, the output of the filter will go to zero within at most order + 1 zero-valued samples being fed in. So the response to that single one-valued sample lives for a finite number of samples and is gone. This is essentially achieved by the fact that there isn't any feedback employed. I've seen DSP articles claim calculating filter tap size and coefficients is simple, but not to me. I ended up finding an online app called T-Filter that saved me a lot of time and aggravation. You pick the type of filter (low, high, bandpass, bandstop, etc.) and then set up your frequency ranges and the sampling frequency of your input data. You can even pick your coefficients to be produced in fixed point to avoid using floating point math. If you're not sure how to use fixed point or have never heard of it, I'll talk about that in the Embedded Optimization Techniques section.
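To make the tap arithmetic concrete, below is a minimal FIR sketch loosely modeled on the put/get structure of the C code T-Filter generates, assuming 16-bit fixed-point coefficients; the names are mine, and the real generated code differs in detail:

    /** Minimal FIR filter: convolves recent samples with fixed-point taps. */
    class FirFilter {
        private final int[] taps;    // fixed-point coefficients (16 fractional bits assumed)
        private final int[] history; // ring buffer of the most recent samples
        private int last = 0;        // next write position in the ring buffer

        FirFilter(int[] taps) {
            this.taps = taps;
            this.history = new int[taps.length];
        }

        void put(int sample) {
            history[last] = sample;
            last = (last + 1) % history.length;
        }

        int get() {
            long acc = 0;
            int idx = last;
            for (int i = 0; i < taps.length; i++) {
                idx = (idx != 0) ? idx - 1 : history.length - 1; // newest sample first
                acc += (long) history[idx] * taps[i];
            }
            return (int) (acc >> 16); // shift away the 16 fractional fixed-point bits
        }
    }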

Embedded Optimization Techniques

Magnitude Squared

Mag square is a technique that can save the computing power of calculating square roots. For example, if you want to calculate the vector magnitude for the X and Z axes, normally you would do the following: val = sqrt((X * X) + (Z * Z)). However, you can simply leave the value as (X * X) + (Z * Z); unless you really need the exact vector value, the mag square gives you a usable ratio compared to other vectors calculated on subsequent samples. The numbers will be much larger, and you may want to use attenuation to make them smaller to avoid overflow from additional computation downstream.

I used this technique in the final algorithm to help accentuate the bursts of acceleration from the background vibrations. I only used Z * Z in my calculation, but I then attenuated all the values by half or -6dB to bring them back down to reasonable levels for further processing. For example, after removing the bias if I had some samples around 2 and then some around 10, when I squared those values I now have 4 and 100, a 25 to 1 ratio. Now, if I attenuate by .5, I have 2 and 50, still a 25 to 1 ratio but now with smaller numbers to work with.
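A quick sketch of the trade-off, with hypothetical helper names:

    // Exact vector magnitude: needs a square root, costly on small micros.
    static double magnitude(int x, int z) {
        return Math.sqrt((double) x * x + (double) z * z);
    }

    // Magnitude squared: skips the sqrt; ratios between samples are preserved.
    static long magnitudeSquared(int x, int z) {
        return (long) x * x + (long) z * z;
    }

    // The squared values grow quickly, so attenuate downstream, e.g. by half (-6 dB):
    static long attenuatedMagSquared(int x, int z) {
        return magnitudeSquared(x, z) >> 1; // 4 and 100 become 2 and 50, still 25 to 1
    }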

Fixed Point

Using fixed point numbers is another way to stretch performance, especially on microcontrollers. Fixed point is basically integer math, but it can keep precision via an implied fixed decimal point at a particular bit position in all integers. In the case of my FIR filter, I instructed tFilter to generate polynomial values in 16-bit fixed point values. My motivation for this was to ensure I don’t use more than 32-bit integers, which would especially hurt performance on an 8-bit microcontroller.

Rather than go into the FIR filter code to explain how fixed point works, let me first use a simple example. While the FIR filter algorithm does complex filtering with many polynomials, we could implement a simple filter that outputs the same input signal but -6 dB down, or half its amplitude. In floating point terms, this would be a simple one-tap filter that multiplies each incoming sample by 0.5. To do this in fixed point with 16-bit precision, we would need to convert 0.5 into its 16-bit fixed point representation. A value of 1.0 is represented by 1 * (2^16), or 65,536; anything less than 65,536 is a value less than 1. To create a fixed point integer of 0.5, we simply use the same formula, 0.5 * (2^16), which equals 32,768. Now we can use that value to lower the amplitude of every input sample by 0.5. For example, say we input into our simple filter a sample with the value of 10. The filter would calculate 10 * 32768 = 327,680, which is the fixed point representation of the result. If we no longer care about preserving the precision after the calculations are performed, it can easily be turned back into a non-fixed-point integer by right shifting by the number of bits of precision being used. Thus, 327680 >> 16 = 5. As you can see, our filter changed 10 into 5, which of course is the one half or -6 dB we wanted. I know 0.5 was pretty simple, but if you wanted 1/8th the amplitude, the same process would be used: 65536 * 0.125 = 8192. If we input a sample of 16, then 16 * 8192 = 131,072; changing it back to an integer, 131072 >> 16 = 2. Just to demonstrate how you lose precision when turning back to an integer (the same as going from float to integer), if we input 10 into the 1/8th filter it would yield 10 * 8192 = 81,920, and turning that back to an integer gives 81920 >> 16 = 1; notice it was 1.25 in fixed point representation.
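The same worked examples, expressed as a runnable snippet (my own illustration):

    public class FixedPointDemo {
        static final int ONE = 1 << 16; // 1.0 in 16-bit fixed point = 65536

        public static void main(String[] args) {
            int half = ONE / 2;   // 0.5   -> 32768
            int eighth = ONE / 8; // 0.125 -> 8192

            System.out.println((10 * half) >> 16);   // 5: 10 * 0.5
            System.out.println((16 * eighth) >> 16); // 2: 16 * 0.125
            System.out.println((10 * eighth) >> 16); // 1: 10 * 0.125 = 1.25, fraction truncated
        }
    }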

Getting back to the FIR filters, I picked 16 bits of precision so I could have a fair amount of precision balanced with a reasonable range of whole numbers. Normally, a signed 32-bit integer has a range of -2,147,483,648 to +2,147,483,647; however, there are now only 16 bits of whole numbers allowed, which is a range of -32,768 to +32,767. Since you are now limited in the range of numbers you can use, you need to be cognizant of the values being fed in. If you look at the FEPFilter_get function, you will see there is an accumulator variable accZ which sums the values from each of the taps. Usually, if your tap history values are 32-bit, you make your accumulator 64-bit to be sure it can hold the sum of all tap values. However, you can use a 32-bit value if you ensure that your input values are all less than some maximum. One way to calculate your maximum input value is to sum the absolute values of the coefficients and divide the accumulator's maximum magnitude by that sum. In the case of the FEP FIR filter, the sum of coefficients was 131,646, so if the numbers can be 15 bits of positive whole numbers plus 16 bits of fractional numbers, I can use the formula (2^31)/131646, which gives the FEP maximum input value of + or - 16,312. In this case, another optimization can be realized, which is not to have a microcontroller do 64-bit calculations.
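As a sketch, that safe-input calculation could look like this (a hypothetical helper; the 131,646 and 16,312 figures above come from the FEP high pass filter):

    // Largest safe input for a 32-bit accumulator, given the filter's coefficients.
    static int maxSafeInput(int[] taps) {
        long sumAbs = 0;
        for (int t : taps) sumAbs += Math.abs(t);
        return (int) ((1L << 31) / sumAbs); // e.g. 2147483648 / 131646 = 16312
    }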

Walking the Signal Processing Chain

Delays Due to Filtering

Before walking through the processing chain, we should discuss the delays caused by filtering. Many types of filtering add delay to the signal being processed. If you do a lot of filtering work, you are probably well aware of this fact, but if you are not all that experienced with filtering signals, it's something you should be aware of. What do I mean by delay? Simply that if I put in a value X and get out a value Y, the time it takes for the most impact of X to show up in Y is the delay. In the case of a FIR filter, it can easily be seen in the filter's impulse response plot, which, if you remember from my description of FIR filters, is the response to a stream of 0's with a single 1 inserted. T-Filter shows the impulse response, so you can see how X impacts Y's output. Below is an image of the FEP's high pass filter impulse response taken from the T-Filter website. Notice in the image that the maximum impact of X is exactly in the middle, and there is a point for each tap in the filter.

Impulse response from T-Filter

Below is a diagram of a few of the FEP's high pass filter signals. The red signal is the input from the accelerometer, or the newest sample going into the filter; the blue signal is the oldest sample in the filter's ring buffer. There are 19 taps in the FIR filter, so these represent a plot of the first and last samples in the filter window. The green signal is the value coming out of the high pass filter. To relate this to my X and Y analogy above, the red signal is X and the green signal is Y. The blue signal is delayed by 36 milliseconds in relation to the red input signal, which is exactly 18 samples at 2 milliseconds; this is the window of data that the filter works on and is the finite amount of time X affects Y.

Delayed Signal Example

Notice the output of the high pass filter (green signal) seems to track changes from the input at a delay of 18 milliseconds, which is 9 samples at 2 milliseconds each. So, the most impact from the input signal is seen in the middle of the filter window, which also coincides with the Impulse Response plot where the strongest effects of the 1 value input are seen at the center of the filter window.

It’s not only a FIR that adds delay. Usually, any filtering that is done on a window of samples will cause a delay, and, typically, it will be half the window length. Depending on your application, this delay may or may not have to be accounted for in your design. However, if you want to line this signal up with another unfiltered or less filtered signal, you are going to have to account for it and align it with the use of a delay component.

Front End Processor

I’ve talked at length about how to get to a final solution and all the components that made up the solution, so now let’s walk through the processing chain and see how the signal is transformed into one that reveals the punches. The FEP’s main goal is to remove bias and create an output signal that smears across the bursts of acceleration to create a wave that is higher in amplitude during increased acceleration and lower amplitude during times of less acceleration. There are four serial components to the FEP: a High Pass FIR, Attenuator, Rectifier and Smoothing via Sliding Window Average.

The first image is the input and output of the High Pass FIR. Since they are offset by the amount of bias, they don’t overlay very much. The red signal is the input from the accelerometer, and the blue is the output from the FIR. Notice the 1g of acceleration due to gravity is removed and slower changes in the signal are filtered out. If you look between 24,750 and 25,000 milliseconds, you can see the blue signal is more like a straight line with spikes and a slight ringing on it, while the original input has those spikes but meandering on some slow ripple.

FEP Highpass In Out

Next is the output of the attenuator. While this component works on the entire signal, it lowers the peak values of the signal, but its most important job is to squish the quieter parts of the signal closer to zero values. The image below shows the output of the attenuator, and the input was the output of the High Pass FIR. As expected, peaks are much lower but so is the quieter time. This makes it a little easier to see the acceleration bursts.

FEP Atten Out

Next is the rectifier component. Its job is to turn all the acceleration energy in the positive direction so that it can be used in averaging. For example, an acceleration causing a positive spike of 1000 followed by a negative spike of 990 would yield an average of 5, while a 1000 followed by a positive of 990 would yield an average of 995, a huge difference. Below is an image of the Rectifier output. The bursts of acceleration are slightly more visually apparent, but not easily discernable. In fact, this image shows exactly why this problem is such a tough one to solve; you can clearly see how resonant shaking of the base causes the pattern to change during punch energy being added. The left side is lower and more frequent peaks, the right side has higher but less frequent peaks.

FEP Rectifier Out

The 49-value sliding window is the final step in the FEP. While we have made subtle changes to the signal that haven't exactly made the punches jump out in the images, this final stage makes it visually apparent that the signal is well on its way to yielding the hidden punch information. The fruits of the previous signal processing magically show up at this stage. Below is an image of the sliding window average. The blue signal is its input, or the output of the rectifier, and the red signal is the output of the sliding window. The red signal is also the final output of the FEP stage of processing. Since it is a window, it has a delay associated with it: approximately 22 samples or 44 milliseconds on average. It doesn't always look that way, because sometimes the input signal spikes are suddenly tall with smaller ringing afterwards, while other times there are some small spikes leading up to the tall spikes, which makes the sliding window average output appear inconsistent in its delay based on where the peak of the output shows up. Although these bumps are small, they now represent where new acceleration energy is being introduced by punches.

FEP Final Out

Detection Processor

Now it’s time to move on to the Detection Processor (DET). The FEP outputs a signal that is starting to show where the bursts of acceleration are occurring. The DET’s job will be to enhance this signal and employ an algorithm to detect where the punches are occurring.

The first stage of the DET is an attenuator. Eventually, I want to add exponential gain to the signal to really pull up the peaks, but, before doing that, it is important to once again squish the lower values down towards zero and lower the peaks to keep from generating values too large to process in the rest of the DET chain. Below is an image of the output from the attenuator stage. It looks just like the signal output from the FEP; however, notice that the signal peaks were above 100 from the FEP, and now the peaks are barely over 50. The vertical scale is zoomed in with the max amplitude set to 500 so you can see that there is a viable signal with punch information.

DET-Atten-Out

With the signal sufficiently attenuated, it's time to create the magic. The magnitude square function is where it all comes together. The attenuated signal carries the tiny seeds from which I'll grow towering redwoods. Below is an image of the mag square output; the red signal is the attenuated input, and the blue signal is the mag square output. I've had to zoom out to a 3,000 max vertical, and, as you can see, the input signal almost looks flat, yet the mag square was able to pull out unmistakable peaks that will aid the detection algorithm in picking out punches. You might ask why not just use these giant peaks to detect punches. One of the reasons I picked this area of the signal to analyze is to show how greatly the amount of acceleration can vary: the peak between 25,000 and 25,250 is much smaller than the surrounding peaks, which makes pure thresholding a tough chore.

DET Mag Square

Next, I decided to add a low pass filter to remove any fast-changing parts of the signal, since I'm looking for events that occur in the 2 to 4 Hz range. It was tough on T-Filter to create a tight low pass filter with a 0 to 5 Hz passband, as it was generating filters with over 100 taps, and I didn't want to take that processing hit, not to mention I would then need a 64-bit accumulator to hold the sum. I relaxed the passband to a 0 to 19 Hz range with the stopband at 100 to 250 Hz. Below is an image of the low pass filter output. The blue signal is the input, and the red signal is the delayed output. I used this image because it allows the input and output signals to be seen without interfering with each other. The delay is due to the 6-sample delay of the low pass FIR, but I have also introduced a 49-sample delay to this signal so that it is aligned with the center of the 99-sample sliding window average that follows in the processing chain. So it is delayed by a total of 55 samples, or 110 milliseconds. In this image, you can see the slight amplification of the slow peaks by their height and how the signal is smoothed as the faster-changing elements are attenuated. Not a lot going on here, but the signal is a little cleaner; Earl Muntz might suggest I cut the low pass filter out of the circuit, and it might very well work without it.

Low pass delayed DET

The final stage of the signal processing is a 99-sample sliding window average. I built into the sliding window average the ability to return the sample in the middle of the window each time a new value is added, and that is how I produced the 49-sample delayed signal in the previous image. This is important because the detection algorithm is going to have two parallel signals passed into it: the output of the 99-sample sliding window average and the 49-sample delayed input into the sliding window average. This perfectly aligns the un-averaged signal with the middle of the sliding window average. The averaged signal is used as a dynamic threshold for the detection algorithm to use in its detection processing.
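To make that alignment trick concrete, here is a sketch of a sliding window average extended to hand back the sample at the window's center along with the average, assuming an odd window size such as 99 (my own illustration, not the class from the repo):

    /** Sliding window average that also returns the sample at the window's center,
        keeping the raw signal aligned with its own average. Window size assumed odd. */
    class CenteredAverage {
        private final int[] ring;
        private int idx = 0, count = 0;
        private long total = 0;

        CenteredAverage(int window) { ring = new int[window]; } // e.g. 99

        /** Returns { average, centerSample } for each new input sample. */
        int[] add(int sample) {
            total += sample - ring[idx]; // replace the oldest value in the running total
            ring[idx] = sample;
            int center = ring[(idx + ring.length / 2 + 1) % ring.length]; // 49 samples back for 99
            idx = (idx + 1) % ring.length;
            if (count < ring.length) count++;
            return new int[] { (int) (total / count), center }; // center reads 0 until primed
        }
    }

Here, once again, is the image of the final output from the DET.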

DET Final Out

In the image, the green and yellow signals are inputs to the detection algorithm, and the blue and red are outputs. As you can see, the green signal, which is delayed by 49 samples, is aligned perfectly with the yellow 99-sample sliding window average peaks. The detection algorithm monitors the crossing of the yellow signal by the green signal. This is accomplished by maximum and minimum start guard states that verify the signal has moved enough in the minimum or maximum direction in relation to the yellow signal, and then switch to a state that monitors the green signal for enough change in direction to declare a maximum or minimum. When the peak start occurs and it's been at least 260 ms since the last detected peak, the state switches to monitor for a new peak in the green signal and also makes the blue spike seen in the image. This is when a punch count is registered. Once a new peak has been detected, the state changes to look for the start of a new minimum. Now, if the green signal falls below the yellow by a delta of 50, the state changes to look for a new minimum of the green signal. Once the green signal minimum is declared, the state changes to start looking for the start of a new peak of the green signal, and a red spike is shown on the image when this occurs.
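As a much-simplified sketch of that state machine (the 260 ms gate and the delta of 50 come from the description above; the real code in the repo tracks direction changes more carefully):

    /** Simplified punch detector: 'sig' is the delayed green signal, 'avg' the yellow average. */
    class PunchStateMachine {
        enum State { PEAK_START, PEAK, MIN_START, MIN }

        private State state = State.PEAK_START;
        private int prev = 0;
        private long lastPunchMs = 0;
        int punchCount = 0;

        void step(int sig, int avg, long nowMs) {
            switch (state) {
                case PEAK_START: // guard: the signal must cross above the average
                    if (sig > avg && nowMs - lastPunchMs >= 260) {
                        punchCount++; // punch registered here (the blue spike)
                        lastPunchMs = nowMs;
                        state = State.PEAK;
                    }
                    break;
                case PEAK: // declare the maximum once the signal turns down
                    if (sig < prev) state = State.MIN_START;
                    break;
                case MIN_START: // guard: the signal must fall far enough below the average
                    if (avg - sig > 50) state = State.MIN;
                    break;
                case MIN: // declare the minimum once the signal turns up (the red spike)
                    if (sig > prev) state = State.PEAK_START;
                    break;
            }
            prev = sig;
        }
    }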

Again, I’ve picked this time in the recorded data because it shows how the algorithm can track the punches even during big swings in peak amplitude. What’s interesting here is if you look between the 24,750 and 25,000 time frame, you can see the red spike detected a minimum due to the little spike upward of the green signal, which means the state machine started to look for the next start of peak at that point. However, the green signal never crossed the yellow line, so the start of peak state rode the signal all the way down to the floor and waited until the cross of the yellow line just before the 25,250 mark to declare the next start of peak. Additionally, the peak at the 25,250 mark is much lower than the surrounding peaks, but it was still easily detected. Thus, the dynamic thresholding and the state machine logic allows the speed bag punch detector algorithm to “Roll with the Punches”, so to speak.

Final Thoughts

To sum up, we've covered a lot of ground in this article. First, the importance of fully understanding the problem as it relates to the required end item, along with the domain knowledge needed to get there. Second, for a problem of this nature, creating a scaffold environment in which to build the algorithm was imperative; in this instance, it was the Java prototype with visual display of the signals. Third, implement for the target environment: on a PC you have wonderful optimizing compilers for powerful CPUs with tons of cache, but on a microcontroller the optimization is really left to you, so use every optimization trick you know to keep processing as quick as possible. Fourth, iterative development can help you on problems like this; keep reworking the problem while folding in the knowledge you are learning during the development process.

When I look back on this project and think about what ultimately made me successful, I can think of two main things. Creating the right tools for the job was invaluable. Being able to see how my processing components were affecting the signal, not just as a plotted output but plotted in real time, allowed me to fully understand the acceleration being generated. It was as if Nate were in the corner punching the bag, and I was watching the waveform roll in on my screen. However, the biggest factor was realizing that, in the end, I was looking for something that happens 2 to 4 times per second. I latched on to that and relentlessly pursued how to translate the raw incoming signal into something that would show those events. There was nothing for me to Google to find that answer. Remember, knowledge doesn't really come from books; it gets recorded in books. First, someone has to go off script and discover something, and then it becomes knowledge. Apply the knowledge you have and can find, but don't be afraid to use your imagination to try what hasn't been tried before to solve an unsolved problem. So remember, in the future, when you metaphorically come to the end of the paved road: will you turn around looking for a road already paved, or will you lock in the hubs and keep plowing ahead to make your own discovery? I wasn't able to just Google how to count punches with an accelerometer, but now someone can.

How Is a CPU Made?

Whenever the CPU in a computer or phone comes up, everyone, whether they understand the subject or not, will say a few words about "the core technology being in other people's hands" and lament how far behind China's chip manufacturing is. But do you really know how a CPU is made? Do you know what the manufacturing process demands in terms of technique and environment? Watch the video below and you will.

http://v.youku.com/v_show/id_XMTQ0Njc2ODAwMA==.html

Note: GlobalFoundries (格罗方德) is the world's third-largest wafer foundry. Its predecessor was AMD's wafer-fabrication unit: AMD spun off its wafer business and founded GlobalFoundries jointly with the Abu Dhabi-based Advanced Technology Investment Company (ATIC) and the Mubadala Development Company.

Strawbees: Drinking-Straw Robots

Strawbees has been a regular at Maker Faire in recent years with its little straw-skeleton connectors. Watch the video and you will marvel at their creativity; before seeing it, we too never imagined that humble drinking straws combined with open source hardware could produce so many fun and interesting things. Builders can also use these connectors to make houses, kites, and much more. Strawbees gives everyone a fulcrum; how you use it to move the Earth is up to your imagination.

What's Inside 创元素?

As a makerspace, 创元素 provides makers who love technology and hands-on practice with the necessary tools, materials, and workspace, where one mind-blowing idea after another becomes reality. As the saying goes, "a workman must first sharpen his tools if he is to do his work well," and every martial-arts master has a trusty sword at hand. So what wickedly cool tools does 创元素 have?

1. Professional soldering workstation

Professional soldering workstation

2. Kit-built 3D printer

Kit-built 3D printer

3. Professional electronic test instruments

Professional electronic test instruments

4. Laser cutter

Laser cutter

5. Assembly work platform

Assembly work platform

6. 3D printer

3D printer

7. Small multi-purpose machine tool

Small multi-purpose machine tool

In addition, there are CNC machines, a heat-transfer press, LEGO gear, open source electronics modules, and more. With so many amazing tools and such a professional space, 创元素 gives makers who love to tinker a place where sky-high ideas can become reality. 创元素, the makerspace in the University Town, is waiting for you to come build things together and change the world!

 

 

On Designing the Logo

A fellow student remarked, "We seem to be good at importing other people's things, yet we don't know how to spread our own good ones." Thinking about why I had earlier refused to show the logo, I suspect it came from a lack of trust in myself.

This logo design was my first contact with graphic design, and I think it may have been the period in which I grew the fastest.
When I first learned I was to design the logo, I was excited but uneasy. It was unknown territory, and I did not yet know where to start. A senior student helped me find some images to draw inspiration from; yes, at the beginning I was imitating. If that period gave me anything, it was practice in using AI (Adobe Illustrator).
logo
The logo was rejected. That was also the beginning of thinking about how to truly design a logo. Looking back on that stretch of time, I felt I had been fighting alone. A logo represents the whole team, yet I had been designing purely to my own taste; how could a logo made that way win everyone's approval?

The whole team discussed the logo and settled on a minimalist style embodying the 创 ("create") of 创元素.
This was an extremely abstracted 创 character; to be honest, not many people recognized it at first glance. At the time I was frustrated and torn: should a logo really be readable at first sight, or... I was lost. I remember teacher 周俨 saying, "Design is a language; if only you yourself can read it, then it is art." Perhaps I had never broken out of my own bubble.

The final logo took shape. Honestly, I was surprised it turned out the way it did. The logo design has now come to a close; perhaps it is not fully satisfying, but I gave it my effort...
I hope the experts will offer plenty of pointers.

Education, Makers, and the Future

The term "education maker" still seems unfamiliar to many people. It was first put forward and championed by a group of educators with a deep attachment to the maker spirit.

吴俊杰, a teacher at Beijing Jingshan School, has kept in close contact with us; at last year's Maker Carnival in Shanghai, they gave a talk laying out this idea.

The world of the future will be made up of the maker-consumers our basic education system brings up. How do we help children encounter makers earlier and recognize the importance of maker education? I am very glad to share here the outline-style paper for 吴俊杰's forthcoming book, "Maker Education: Jieke and Future Consumers." The passages in brown are supplementary views provided by @张成wust, one of the founders of 创元素.

 

Maker Education: Jieke and Future Consumers

A Modest Discussion of the 2014 Horizon Report

 

A scholar's research usually contains both reflection on the present and prediction of the future, and the two are not in conflict; as we often say, "from practice it comes, and to practice it returns." The maker movement, and its influence on education, likewise grew out of a series of practices dating back to the last century. Still, we never forget to trace the thread of that practice and survey the future's possibilities, the point being to work out what to do now, why to do it, and how to do it. So it is no surprise that we describe the course of the maker movement, look ahead to where it is going, and interpret from different perspectives its influence on society and on one unit of society: education. We love to gaze at the future for two reasons: first, the present may be too frustrating, and reforming it needs a sufficiently large justification; second, only by clarifying the future's possibilities can we judge how to walk the road beneath our feet. Therefore, when the 2014 Horizon Report (higher education edition) treats the maker movement as a mid-term trend and devotes considerable space to describing and projecting it, one has to review the existing practice from the perspectives of education and society and give it a reading along a rather distinctive dimension, a reading that may bring a glimmer of dawn on the horizon to the many who practice in solitude.

 

The biggest difference between maker education and the quasi-imperial-examination, test-oriented education is that maker education encourages people to solve real problems through hands-on practice and to grow their abilities in doing so. Of course, the relatively complete and closed logical system of book knowledge has its own universality, which showed especially clearly in the birth of modern Western science; the rigor of written knowledge left many intellectuals enthralled by its beautiful reasoning and deduction. We should view the two whole and together rather than set them against each other: when Newton built his complete system of mechanics, let us not forget that he stood on the foundation of generalizing the inclined-plane experiments of Galileo and others before he could, for his time, bring even the cosmos into a unified system. Skills come in many kinds; what written material preserves tends to be the part with strong logical completeness, while certain skills cannot be passed down through books and must be mastered through personal practice, and once mastered they are rarely forgotten, like swimming or riding a bicycle. Our written examination system has great difficulty assessing abilities of this kind.

 

1. Jieke: Great People and Great Works

In many superhero movies we see a great person step forward to save the world when all humanity faces extinction. Yes, we need great people. Perhaps the most optimistic moment at which humanity will need to save itself comes five billion years from now, when the sun's energy is nearly exhausted and we must flee Earth before a supernova destroys the solar system; regrettably, that is probably not something one great person can solve. We need a way to produce as many great people, great works, and great jobs as possible. Here it helps to borrow another rendering of the widely circulated and respected notion of the geek (极客): 杰客 (jieke), to describe great people. When the Chinese speak of 英雄豪杰 (heroes and greats), there are actually quantitative requirements: one whose ability surpasses ten people is 英, a hundred is 雄, a thousand is 豪, and ten thousand is 杰. So when geeks, the kind of people who declare they will change the world, are defined as jieke, they really do have the capital to change it, for they can lead ten thousand people forward. But if, by this crude definition, jieke were merely a fixed percentage of the population, then the more populous a country, the more jieke it should have. That is in fact a false proposition, because the concept of jieke is defined relative to the shared community of human civilization, which means that at equal populations an advanced civilization will always have more jieke than a backward one, as in the legendary Academy of Athens, or China's age of the Hundred Schools of Thought, when jieke gathered in crowds. But jieke face a series of predicaments, some related to education, some to the social environment, some to themselves, and the maker movement aims precisely to break these predicaments: to make the environment that nurtures jieke a little better, to make their regional distribution a little more even, and, most importantly, to move beyond the usual definition of jieke as one-in-ten-thousand toward the-only-one-in-ten-thousand, meaning that as long as your talent in some respect surpasses everyone else's, you can become a jieke who changes the world. But we do not want this to be merely talent-show entertainment; we hope these jieke will enter industry and truly change the world.

There are many kinds of great people. One kind communicates with the world through great works, for example Picasso, Mo Yan, Edison, or Grigori Perelman; the common feature of their work is creative output with an unmistakable personal style, and for many such jieke the core work can be completed by one person. Modern industry, however, has made the manufacture of industrial goods something no single person can finish; teamwork matters more than ever, while the texture of the handcraft era has gradually drained away. Another kind of great work, such as Avatar, the iPhone, the Burj Khalifa, or a lunar-exploration program, requires the work of a crowd; it is hard to imagine such complex engineering being completed by a few ordinary people, hence companies and the state's specialized agencies exist, for coordination and efficiency in completing a great work. Yet such works always end up represented by one person. This may be the state of "suffer the bitterest of the bitter to become the highest of men," but must one really keep climbing without pause to reach the top? Besides, "of all who went off to war, how many ever returned?" Even those who dominate for a moment, however self-assured, seldom leave their names in history. Great people can only be remembered and passed down through great works, as in the verse "the Great Wall still stands today, yet nowhere is the First Emperor of Qin to be seen." If everything that is remembered is a great work, then the practical question of life today is: in a great era, how can education bring forth more great people who create great works?

 

Looking back at the early natural science of Europe, many famous scientists were born into well-educated aristocratic families, such as Cavendish, Davy, and Bernoulli. So today, when people on 雷锋网 argue over the claim that "without being born rich, there are no outstanding makers," they forget that there were also people like Faraday. Some may call this a question of probability; we will leave that aside, but at the very least we must not deny latecomers the possibility of achieving what they hope for through effort and drive.

As for the one-in-ten-thousand 杰, the real question is this: when you stand in a museum reviewing the history of Napoleon, who swept across Eurasia on military merit, do you see only the one man in the portrait, or the army behind him? The value of "the masses create history" that Marxism has always stressed seems to have been cast aside by many Marxists.

As for the iPhone and lunar exploration, these belong to engineering practice rather than scientific discovery. The more foundational the science, the more it stresses simplicity, intuition, and universal applicability; the closer to the top-level engineering practice that solves real problems, the more the complexity of the problems grows and the more manpower and material resources are needed, even though it is only the application of basic knowledge. Different scientific and technical workers sit at different levels. I discussed this point at a 创元素 sharing session with a student ranked first at South China University of Technology; below is the model I tentatively set out, for reference only:

(Images 1-3: the tentative layered model)

2. The Maker Movement: The Evolution of the Craftsman Era

When people talk about makers, they tend to think of wearables, interactive art pieces, and DIY projects, as if makers only played with digital things. What we care about more are the community character of makers and their workflow: the maker movement is a movement to change how we obtain industrial goods, and at its core is the evolution of the craftsman era. Think back to the villages of the ancients. Everyone in the village had a role, but some always lived by their craft and were called artisans. A blacksmith, for instance, knew who would wield the sword he was forging, so he tailored its size and weight to that person's height and strength; he even knew the habits of each family's horse and had horseshoes ready before the old ones wore out, so they could be replaced in time. He loved to dig into the tricks of using his tools; a clumsy hammer for beating hot iron became, in his hands, an instrument of endless variation, and there were always knacks taught by word of mouth to his apprentices or children. This craftsman-era way of working existed in almost every corner of traditional handcraft: the dye-house auntie, the noodle-shop owner, the cotton fluffer, the shoe mender... Some still live around us; others have receded with the times, and people realize only at tourist attractions that they once lived warmly among us. Assembly-line industry, with its efficient, low-cost products, battered traditional handcrafters black and blue, while the technical details of traditional handcraft, constrained by the father-to-son mode of transmission, struggled to carry on, for it is hard to guarantee a son shares his father's passion, and the prospects could hardly be called bright.

 

The artisan mode of production satisfied individualized needs, but after the Industrial Revolution products became severely homogenized and what people stressed was production efficiency. Many believers in the maker movement are in essence challenging, to some degree, an economic principle Adam Smith set out in The Wealth of Nations: "division of labor produces efficiency." Of course, different people see this question differently. Some hold that "division of labor produces efficiency" is an iron law and foundation of the market economy, hard to challenge, and that the maker movement is merely a punk-like "subculture" that can hardly become mainstream, or at most a small-scale economy of personalized services and custom products that resist mass production.

An answer to this question can in fact be found in "Makers: The New Industrial Revolution": the community model. For the sake of some wish, however niche, people unite and form a community, rather like a club at a university, but with a clear aim: to make a product that meets their needs, or the needs of others they hope to meet. After that, the company changes; it is no longer the engineers inside the company who develop the product and the company that delivers it to consumers. The users, that is the consumers, are themselves the product developers; their desire to develop is stronger, their sense of participation higher, and they are naturally more willing to pay for the product.

Perhaps this sounds like a fantasy, but looking at China today, I personally see the Xiaomi forum as an embryo of this model, even though Xiaomi is still a company. What can already be seen is that, compared with traditional companies, users are heavily involved in the process of developing and improving the product. The power of the open source model is that, in the future, product developers will no longer need to go out and collect market feedback, because there is no "feedback": the developers are you, and the users are also you.

But a user's degree of participation will fall into several levels, tied closely to the overall caliber and professional knowledge of the population. For the same problem, the postings in a community might read:

1. I never sleep well when this fan is running.

2. This fan gives off noise at an unpleasant frequency during use.

3. This fan gives off noise at the frequencies the human ear is most sensitive to; my professional judgment is that the bearing is at fault.

4. This fan gives off noise at the frequencies the human ear is most sensitive to; based on my design experience, it should be switched to a different type of bearing.

5. This fan gives off noise at the frequencies the human ear is most sensitive to; based on my design experience, it should be switched to a different type of bearing. Attached is a price comparison of the relevant bearing types, demonstrating that the user experience can be improved with no increase in cost.

The obvious point is that the user progresses from being unable to describe the crux of the problem clearly, to locating the fault, to proposing a solution, and even to demonstrating the solution's feasibility and economics. Imagine how much effort our education must invest to raise the cognitive level of the whole population to that point.

One can imagine that the economic prosperity of the future will surely belong to those countries and regions with thriving, high-level open source communities, where knowledge and skill play an ever larger role: their products will perform better, be of higher quality, and better fit consumers' needs, and prices will not be held high by monopoly and patent protection.

 

 

The large-class school that humanity invented was meant to solve this problem. For example, a teacher in the design department of Fudan University invited a wax-figure craftsman to teach students wax-figure making; the knack the students most hoped to "steal" was how to implant the fine hairs of the skin realistically. The answer turned out to be simple, but without a school, an ordinary person might struggle a whole lifetime to arrive at that insight. With the Internet, on instructable.com, the world's largest site for sharing how-to tutorials, tens of thousands of people share the knacks of making things; you can even learn how to divide a banana into segments without breaking the peel, yielding a prank tool: when someone excitedly peels the banana, they watch the segments drop away in pieces before their eyes. These useful or useless tricks are shared and rewritten by makers, enthusiasts, and ordinary people around the globe. The spirit of sharing, the Internet, and the open source hardware boom are driving the outcome of the craftsman era's evolution: the birth of the digital artisan. Calling it an evolution implies that digital artisans are both connected to and distinct from traditional handcrafters, a series of statements about what the digital artisan is, and is not.

Digital artisans retain the traditional handcrafter's demand for the feel and quality of a product, not the rough, easily damaged industrial good. Manufacturers sometimes even deliberately make their products easy to break, partly so consumers will buy new ones and boost sales, partly to avoid the legal disputes over injuries caused by aging products.

Digital artisans retain the traditional handcrafter's care for the user: made-to-measure becomes a basic standard, rather than, as with clothing, a few simple sizes assumed to fit everyone; in reality many people are an S at the chest and an L at the belly.

Digital artisans retain the pursuit of craft and keep developing new techniques, but they discard the craftsman era's secrecy about tricks of the trade; they share those techniques, because they know that sharing and co-authorship will be the only way to build great works in the coming era.

Digital artisans retain the community-network tie with their customers, only no longer confined to a geographic region; beyond serving the neighborhood and people they know, seeking customers across the world through the network will become a common pattern, as with the success of Kickstarter.

Digital artisans can handle the full range of digital technology: digital media, interface programming, modern logistics, and hardware user experience. They excel at one of these and know how to find people more skilled in the others to form a project team, a far cry from the traditional image of the technical engineer shut in a lab staring at an oscilloscope.

The way digital artisans become jieke is not by building a great company or institution or even a product; what they pursue is above all reputation, the practical effect of being able to influence ten thousand people within some field. That reputation is built on natural gifts, sensitivity to others' needs, personal taste, and devotion to the details of the work, combined with the spread between cost and output, so that becoming a digital artisan no longer depends especially on who your father is, what your position is, where you were educated, or how much money you have, but far more on endowment, character, perseverance, and other factors a person can control.

The digital artisan team is organized as a flat, project-based collaboration rather than the traditional company form. In fact, many maker teams, in the course of becoming companies, have gone through the growing pains of management, their originally flat relationships forced into pyramids. Some very good teams even fell apart in the process, and even those that succeeded could not escape sighing at how cold it is at the top. The maturing of the digital artisans' form of organization will spare human nature the constraints of the company form.

The digital artisan's core asset is a capability, one's own core competence plus the strength of one's circle of friends, rather than copyright and ownership. Speed of dissemination, rights of use, and quality of service will become the core factors in a digital artisan's success, and the ability to build these factors will become the digital artisan's core asset.

 

A question that maker culture, community culture, and open source business models often raise against the traditional business model is: why do we need companies, and why do we work in them? Does the company exist because it is more efficient, or because it better protects the interests of shareholders and employees? Looking back at the development of capitalism, the limited liability company began in Holland, bound up with modern banking, stock trading, and a series of other financial institutions. Its emergence embodied capital's leading, dominant position in the market economy, but let us not forget the historical backdrop: most economic activity then came from maritime trade and physical workshop production, since this institution was born before the Industrial Revolution was complete. Those activities required the backing of heavy capital, at enormous risk. After the Industrial Revolution, machines and equipment seemed to matter more than people, and people gradually worked out some of the laws by which markets run, yet capital still mattered. Only with the third industrial revolution, above all its information technology revolution, could nearly everyone with a computer and the Internet bring their effort and talent to bear on an equal footing; the appearance of this revolutionary tool has, to some degree, utterly overturned the model in which only capital creates opportunity and possibility. Look at the successes of Silicon Valley: in every case technology and talent have begun, conversely, to draw capital and resources to themselves. If we firmly believe that knowledge can grasp the pulse of society better than capital, or that economic models and products and services stressing that human value stands above the value of capital can emerge, then the earlier discussion once again reinforces the role of the community, which combines in one body the gathering of ideas, the allocation of skills to realize them, the reduction of risk, and marketing and feedback-driven product improvement. If examples keep appearing of community enterprises that run at lower cost and higher efficiency and defeat enterprises run on traditional process management, then we will have ever more reason to believe that open source will defeat patents, and communities will replace companies.

 

 

3. Is Everyone a Maker?

Since the emergence of the digital artisan community represents one way the maker movement produces jieke and great products, must everyone in the maker movement become a maker, and what does becoming a maker mean? This is a genuinely open question. At the present stage, the notions of the small innovation team and the digital artisan blur together, but in the future the distinction will grow ever clearer. Classifying makers therefore matters. Any classification is like observing an entangled quantum state: complex phenomena collapse into a few simple observables that cannot exhaust the whole of the thing, yet they help our understanding move "from one aspect toward more aspects."

Bit makers: these are the makers who, after the Internet arose, shot DV videos and wrote blogs. They appeared earliest and had a strong self-media consciousness from the start. The combination of providing content and providing platforms let blog sites, fiction-sharing sites, and WeChat public accounts catch on rapidly, and bit makers deserve part of the credit, though many of them have no idea they are classified as bit makers.

Electronics makers: at first these were mostly electrical engineers and electronics students. After the Arduino community was established, large numbers of designers, artists, hobbyists, and ordinary people joined, making electronics makers a very large category. Many electronics makers are intent on building a product prototype and turning it into a product through crowdfunding, but what still matters is the sharing and open-sourcing of the idea, because "sharing brings them joy and potential business opportunities, but they are not obsessed with those opportunities."

Atom makers: the atom maker's signature product is the 3D printer. The most important physical change in this world is the change of state: molten steel is cast in molds into wheels, and molten plastic filament is deposited, by a 3D print head positioned under Arduino control, layer by layer into a pre-designed object. Melting and solidifying require energy but lose no mass, and they produce goods of practical use; once a product loses its use value, it can be melted back down into filament and used again and again. 3D printing has given atom makers, the group concerned with the link between the electronic world and physical objects, a calling card of their own, and that card is moving from prototype design into product manufacturing. The changes that desktop manufacturing will demand of future workers require corresponding change in education, and that mode of change, the agile method, appears in the 2014 higher education edition of the Horizon Report as a long-term trend even harder to realize than the maker trend.

Craft makers: craft makers hold to the ways of the traditional handcrafter, because traditional handcraft's closeness to nature and to people always keeps an independent charm. Craft makers spread weaving, planting, dyeing, screen printing and other traditional techniques in the form of NGOs or works of art, and these will always have a distinctive appeal. The process of inquiring into craft is exactly the problem that today's technology education struggles to solve, and many of the craft makers' working methods and habits of mind are worth borrowing by researchers in technology education.

Gene makers: as the prices of gene sequencing and gene-intervention equipment fall, as speeds rise, and as the technology is open-sourced, gene makers are joining the maker family with great speed and force. Take Shanghai's 新车间, a benchmark team in China's maker circle: they entered the biology field in 2012, and their aquaponics project and fluorescent E. coli transgenic project both displayed the rich future possibilities of this field. If two technical towers were to be raised among the maker categories, they would belong to the electronics makers and the gene makers, for the trend of electronics is to study the mechanisms by which human intelligence forms, while gene makers study the mechanisms by which human life forms. Atom makers and craft makers lean more toward engineering; the walking makers and education makers below lean more toward society.

Walking makers: walking makers are volunteer groups or individual doers, such as 一公斤电子's traveling teaching-support project, or certain people's hitchhiking "grand tour" to Europe. Walking makers often combine with bit makers, recording and influencing society in self-media form. Broadly speaking, the walking maker is a state of being on the road; the many people who want to do something, serve someone, and change something can, if they assemble in the maker fashion, all be called walking makers.

Education makers: education makers are makers with a calling to serve education, or teachers who hope to turn makers' works into courses. They understand how the current education system runs and where it falls short, and they root themselves in practice through courses, advancing maker education within the bounds of their own positions. Enlightening wisdom and lighting up lives is their work, but more important, the education market has made room for some of the dreamers among makers, who can dabble here and there and teach a little while keeping a kindergarten-like joy and creativity. Maker education has also let some teachers find a new direction for their careers, while making fun things has become their way of resting; sharing with students amplifies the joy and lets the life of teaching stretch out and flourish.

So let us return to the opening question: is everyone a maker? The classifications above give off an air of "high-end and classy," but in a country like China, with two hundred million migrant workers and sixty million left-behind children, does the call that "everyone is a maker" ring a little hollow? In fact, everyone hopes to be a maker, because everyone hopes to live a good life with dignity, even a respected one. The vision of the maker movement shows people the ideal state of being human: "to spend one's life in one's own way," which surpasses the external satisfactions brought by power, money, and fleeting fame. And the maker movement does push for everyone to be a maker, because only when everyone is a maker can people free themselves from external ends and find human freedom and fullness. As for how everyone can be a maker, we must first start from the fact that everyone is a consumer.

4. Everyone Is a Consumer

"Everyone is a maker" is still some distance from reality, but that everyone is a consumer is indisputable fact. Let us reflect on the state of consumption, look at consumers' habits, and ask what influence the maker movement might have on consumer culture. Today's consumer culture is a development model of throwaway economics: people chase newer and cheaper things, phones get swapped often, and purchases are comparison-shopped across stores on Taobao. There is nothing wrong with that in itself, but this consumer culture corresponds to a series of product cultures.

Valuing the exterior but not necessarily the interior. A brand-name power bank may use an iPhone-like metal case, yet be unwilling to use more durable, safer lithium cells.

Valuing promotion but not service. People are habitually bombarded by wall-to-wall advertising, but once we have bought the product it is hard to get follow-up service, and the low price leaves us without even a reason to repair it. In fact, many manufacturers also hope consumers will throw broken things away rather than repair them, because repairs are usually outsourced, and every item repaired costs the manufacturer an extra fee.

Valuing seductive marketing while lacking guidance on values. Many online games fly the flag of "free" to call users in, with buying gear as the goal behind it. One rather extreme story has it that a gaming company mobilized over a hundred shills to play alongside two wealthy players, fanning the flames and stirring up online brawls; for the sake of vanity in the virtual world, the two spent over a million on equipment, which in moral terms differs little from fraud. A consumer culture with no guidance on values has produced feeble positive energy and warped competition in some fields.

Valuing impulse psychology while lacking credible evaluation. Fan economics teaches us: create an idol, make everyone believe in the idol, and then count the money. The absence of credible third-party evaluation means people can choose products only by price and appearance; in the food industry above all, where food is produced for unknown consumers a thousand miles away, a market without effective oversight makes us miss the days when a farmer carried a basket of eggs into town to sell.

The makerspace, as a gathering of grassroots intelligence, a flat place where knowledge, products, and services are produced, can appear within the current consumer culture as the dissector of product details, the advocate of a repair culture, and a credible third-party institution, guiding consumer culture onto the right path. And what kind of consumption is our education guiding? All education preaches thrift and using things to the fullest, but an education culture that is overly utilitarian and stresses exam efficiency and results is really the same as the get-rich-quick model of throwaway economics: it slights reason, encourages overdrawn consumption and extravagance, and its surface show leads inevitably to education's result, talent that cannot meet the needs of social development.

5. The Future Consumer

Consumer culture bears on a country's future: that Germany could act as Europe's stabilizer during the euro debt crisis is closely tied to its conservative, pragmatic consumer culture. The future's consumers will determine the future's product models, and an important force behind makers winning the attention of mainstream culture, and becoming a trend in the development of education, is the arrival of "angel consumers" and the crowdfunding model. Crowdfunding is a way of pooling everyone's money to fulfill a dream; the 2014 Horizon Report, for instance, cites "a student at Cornell University using Kickstarter to develop Kickstat, a small spacecraft intended for launch into low Earth orbit." The initiator of a crowdfunding campaign is not a daydreamer but more like a designer who markets a dream: he films his idea as a video, tells everyone what he wants to make and what he has already done, then publishes a financial plan and offers different gift rewards to different levels of backers. The initiator sets an amount at which the dream launches; when that amount is reached, the project starts automatically, to be completed within a set time with the backers rewarded. Some crowdfunding projects aim to make a creative product, with the reward being the right to buy that product at close to cost; the maker team and the consumers thus share the cost of the creative product between them. The consumers who join the crowdfunding pre-purchase the product, accept the risk that the project may fail in the end and the reward never arrive, and tolerate the late delivery that project delays may cause, so to the maker team they are like angels of forbearance; hence they are called angel consumers. The roles of angel consumer and maker swap constantly; on the crowdfunding site 点名时间 you can see, for every user, which campaigns they have launched and which they have backed. The angel consumer gives us one possible model of the future consumer, because if future consumers still follow "throwaway economics" in choosing products, we will find only flashy shells in the market, or blunder into consumption traps like beasts in a cage. The traits of the future consumer are these:

The future consumer is the one who configures the product's functions. The trend in smart appliances is unified central control and standardized interaction; much as with Intel's Edison project, a computer the size of an SD card will become the control center of many smart devices. The smart air conditioner, smart refrigerator, and smart washing machine will then all run the same programmable system, which means smart appliances will be reusable and rewritable: reusable, in that the smart heater's chip can move into the smart fan once winter has passed; rewritable, in that users themselves can rewrite the interfaces and parameters of their smart furnishings. The consumer is then no longer merely a worshipper of technology and the one who pays the bill, but truly becomes the master of the smart device.

The future consumer is at the same time a creator of creative products. The labor market will flatten, and small teams or projects will become the mainstream way of working. To achieve this, open education worldwide will become very important: education makes technology no longer a means by which one group skims the spread off another group's labor or natural resources, but a means by which people attain happiness and self-realization.

The future consumer buys the products of people they know. Community networks display every facet of each person before the consumer; credit and prior contribution become more important than ever. Whether it is a bulk transaction or a scattered purchase, consumers will know what kind of person produced it, so honesty and service matter more than at any time before. When buying an item, consumers will weigh the effect of the purchase on the producer and its social benefit, so "angel consumers" will grow ever more numerous.

These three basic traits describe, within the model of the future consumer, the consumer's relation to the product, the consumer in the maker's eyes, and the maker in the consumer's eyes. Obtaining a product will no longer be just an act of satisfying material needs; the process of production and consumption will be more like one of making new friends and perfecting oneself. Who, then, is the jieke who leads future consumption?

6. The Jieke and the Future Consumer

If we measure by a one-in-ten-thousand standard, every era has its jieke: in an era of war, perhaps the general who turns the tide; in the era of capital, perhaps the head of a trust. A jieke, simply put, is one who leads the era and is fully recognized by it. As the consumers are, so the jieke will be: consumers are the bearers of the sedan chair, and it is they who lift the jieke up.

The jieke lifted up by consumers under throwaway economics is bound to be a master of ever-lower product costs, with the likely consequences of squeezed workers, shrinking product lifetimes, and hidden losses for consumers. Worst of all, without a recycling loop, the garbage all those consumers generate is a bomb that could go off at any moment, doing lethal harm to the environment.

The jieke lifted up by consumers in blind worship of technology is bound to be a movie-star technical virtuoso, with the likely consequence of consumers forced into endless upgrades and replacements. The greater danger lies in monopolies on data, with patents becoming a tool by which some hold others hostage. Patents can even wound the jieke himself, for lying on one's laurels collecting rent is no creative way to live.

But the jieke lifted up by future consumers, in a world where everyone is a maker, is bound to be the contributor of a widely circulated open source design. Open source means knowledge held in common. In the era of the knowledge economy, knowledge has at last found a way to stand against power and capital, and the way it wins is simple: knowledge was always something every person could master; power and capital hijacked it and made it a tool for preserving inequality. Open source means public ownership of knowledge, not within one country, but belonging to all humanity.

What the jieke among future consumers harvests is above all reputation rather than material enjoyment. Maker education, as an education everyone ought to receive, is in essence education for future consumers; only on that foundation is maker education truly for everyone. And since consumers decide where the future goes, education's role has perhaps never been greater.

7. Maker Education: The Challenge of Desktop Manufacturing

The modern class-based teaching system was built on the needs of mass industrial production. Falling demand for labor in the primary and secondary industries is one reason education now looks overstretched, and the rise of desktop manufacturing may be the last straw for the class-based system. Desktop manufacturing means education must cultivate an all-rounder who can juggle design, production, marketing, and logistics, or who at least knows where to find people to help. This is no mere vision of the future; today's technology already shows the outline of the model. The size and price of the open source Microduino boards make it unnecessary to print a dedicated circuit board for each electronic product; improvements in 3D printer precision, and research into distinctive textures and finishes, make it unnecessary to open a mold for the enclosure; Makeblock-style building-block open structural parts, which can assemble something like a small production line, make buying dedicated machinery unnecessary, and once a job is done the parts can even be resold piecemeal for a small depreciation fee; e-commerce, self-publishing marketing, and modern logistics make many departments of the traditional company unnecessary. What, then, is necessary? What is necessary is to cultivate a person: one who can discover others' needs, has the taste to carry off design and marketing, and grasps the law of value. And we note with regret that the existing class-based education system can hardly produce such people, while makerspaces, which drift outside the system or live on as university clubs, seemingly can. A makerspace closely resembles the "samba school" style of learning environment so revered by classic learning-science theorists: people of different ages and backgrounds learning and collaborating together, generating complex content from simple rules, and creating new knowledge. Hence the 2014 Horizon Report lists keeping education relevant as the gravest challenge to technology adoption in higher education, for in the eyes of desktop manufacturing and the digital artisan, the first label by which talent is measured will not be which university you graduated from, but which makerspace you belong to, whom you learned with, what projects you have done, and what works you have made.

8. School as Society, Education as Life

"School as society, education as life" was Dewey's social vision and educational ideal, and that ideal is slowly being realized through the maker movement. Technology's development has its own ends; self-destruction and self-liberation can each be argued for, and the crux lies in what kind of future consumer education cultivates. The ideal classroom is a blend of the flipped classroom and project-based learning. Around its edges is an electronic library, where students teach themselves from curated videos and community materials while the teacher runs standardized tests to check that basic knowledge and skills have taken hold. At its center are discussion tables, fabrication tools, lab instruments, and display platforms, where students work on concrete projects and solve problems in real situations, applying knowledge in an integrated way while collaborating, growing in emotional intelligence, coming to know themselves, learning to live, finding their bearings in life, and seeking peak experience and self-realization. The Horizon Report treats the flipped classroom as a major development in educational technology to be adopted within one year, treats the evolution of online learning as a long-term trend, and treats scaling up teaching as a grave challenge; from the standpoint of maker education, high-quality online learning that serves everyone can be seen as a way of cultivating makers. Makerspaces carry the genes of this ideal classroom environment; what is missing for now are flipped-classroom resources and standardized tests, but in the long run what is fundamentally missing is a change in how the teachers who organize such learning environments work. In a letter to Qian Xuemin dated October 7, 1993, Qian Xuesen spoke of the 18-year-old master's graduate as a "master of dacheng zhihui education" (大成智慧, the wisdom of grand synthesis). He wrote: "These past few days I have again been thinking that Chinese education in the 21st century should see everyone graduate from university as a master's holder at 18, but what kind of master? Now I think: a master of dacheng zhihui. Concretely: ① familiar with the system of science and technology, and with Marxist philosophy; ② combining science, engineering, letters, and the arts, and possessing wisdom; ③ familiar with information networks and adept at using computers to process knowledge. Such a person is an all-rounder. We have gone from the all-round giants of the Western Renaissance, to the specialist education of the mid-19th century that split science, engineering, letters, and the arts; then to the 1940s system of science and engineering combined, with letters and the arts added; then to today's budding integration of science, engineering, and the humanities. In the 21st century we return to the all-rounder of the Renaissance, but with one difference: the 21st-century all-rounder does not negate the specialist; it is just that this all-rounder needs only about a week of study and practice to move from one specialty into a quite different one. This is the dialectical unity of breadth and depth." If we re-examine the dacheng master's idea from the angle of the maker movement, we find that the 18-year-old dacheng master will surely be raised in a makerspace, and we are delighted to see that Beijing Makerspace and Xinchejian already have such masters, some not yet 18. Great synthesis, great joy: we are not crowning heroes by age in an attempt to forge an educational myth; rather, at 18, an age full of creativity and youthful drive, many people already command skills enough to change the world, and they keep learning all their lives. That is itself a great joy for society. How attractive this lifelong-kindergarten way of living is, and it is no longer far away.

Finally, back to a question of physics. As technology advances, humanity will eventually solve the energy problem, meaning electricity will be free, and 3D printing and recycling of all kinds of materials will mean the problem of material circulation is solved. With matter and energy thus liberated, what comes next? Most likely knowledge held in common under heaven, though the order of the steps may well be reversed.

Wu Junjie, Beijing Jingshan School, April 24, 2014, at Zifuju (自缚居)

 

The Robots of 蒜泥空间 (Suanni Space)

Having always knocked around the maker circles of Guangzhou and Shenzhen, I (@张成wust) never had a clear picture of how makers were developing inland; at least during my four undergraduate years in Wuhan, the maker atmosphere in the interior never felt strong.

But meeting Mr. Yang at Maker Faire Shenzhen 2014 made me see how wrong I had been: China surely has many hidden makers who have simply never shown their faces, who may not even know the label "maker," yet who have been doing what makers do for years, not just one or two.

I hear they began building this robot as undergraduates and have improved it bit by bit ever since; it is now in its fifth generation, a project pursued without pause for more than three years. It reminds me of @罗振宇's talk of "grinding away at yourself to the end" (死磕自个).

This time, with Mr. Yang's permission, I am reposting some materials produced by 蒜泥车间 to share with friends of 创元素. 蒜泥车间 is a makerspace in Xi'an, initially founded by several students from Xidian University.

Pretty flashy, right? O(∩_∩)O hahaha~

There are pictures for proof and videos for punch, plus a PDF introduction from Mr. Yang; brilliance like this is not to be missed.

Ah!~ Let me adapt a line from old Marx: robot enthusiasts of all China, unite! ...(30,000 words omitted here)
 

The Original Intention of 神念 (NeuroSky)

At the just-concluded Maker Faire Shenzhen 2014, editor Ted was captivated by a mind-controlled flying vehicle (see Figure 2).

 

Steering a drone with your own brainwaves: exciting just to think about. Fun!

The show also had a little game that used brainwaves to test concentration, which this editor found great fun too (see Figure 1).

What the two have in common is that both systems are designed around an advanced EEG (brainwave) sensor. So what is an EEG sensor? Chatting with some friends, this editor learned that most such projects now use the brainwave sensor from a company called NeuroSky (神念). Heh, magical, right? (See Figure 3.)

 

Want a peek at the chip's internal structure? Haha, Ted dug up some similar material to share, with the stress on similar (see Figures 4 and 5).

 

In those figures, the right-hand image is a bare-die photo of the whole chip, while the left-hand one shows its block diagram, comprising, in order:

a DA (differential amplifier); a PGA (programmable-gain amplifier); a Filter; an ADC (analog-to-digital converter); a DSP (digital signal processor); and the RF block, which together with the little triangle beside it is the wireless radio-frequency communication module and antenna, the very subject this editor is now majoring in.

For now this editor can only explain very roughly how 神念 works; forgive a novice swinging an axe before the master's gate:

Brainwaves are very weak. The chip's low-noise amplifier boosts the brain signal, the signal is then sampled and digitized, and the digitized result is handed to the processing unit, which turns it into a display of our "intent"!

Haha, magical, right? What is 神念, "mind-intent"? This is it! Er... okay, don't block me; that was the crudest possible summary. Here are the two points Ted considers key:

  1. The low-noise amplifier is critical: the brain's electrical signal is so weak that it is easily drowned in noise during amplification, and the LNA's performance shapes the first look of the "mind-intent";
  2. The algorithm design of the processing unit is the core: the digitized 0101 stream reveals nothing by itself, so the designer must devise a reliable way to read the will behind the "mind-intent" (a short sketch after this list shows what that reading step can look like on the host side).
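To make point 2 concrete, here is a minimal Arduino-style sketch that fishes the "attention" value out of such a module's serial stream. The byte layout assumed here (0xAA 0xAA sync bytes, a length byte, payload rows in which code 0x04 carries attention, and an inverted-sum checksum) follows NeuroSky's commonly documented ThinkGear stream format, but treat all of it, the 57600 baud rate included, as an illustrative assumption to be checked against the official datasheet rather than a verified driver.

    // Minimal ThinkGear-style reader: hunts for a packet, verifies its
    // checksum, and extracts the 1-byte "attention" value (code 0x04).
    const uint8_t SYNC = 0xAA;

    uint8_t readByte() {                     // block until one byte arrives
      while (Serial.available() == 0) {}
      return (uint8_t)Serial.read();
    }

    void setup() {
      Serial.begin(57600);                   // typical module baud rate (assumption)
    }

    void loop() {
      if (readByte() != SYNC) return;        // hunt for the 0xAA 0xAA sync pair
      if (readByte() != SYNC) return;

      uint8_t len = readByte();              // payload length, documented max 169
      if (len >= 170) return;

      uint8_t payload[170];
      uint16_t sum = 0;
      for (uint8_t i = 0; i < len; i++) {    // read payload, accumulate checksum
        payload[i] = readByte();
        sum += payload[i];
      }
      if (((~sum) & 0xFF) != readByte()) return;  // bad checksum: noise, drop it

      for (uint8_t i = 0; i < len; ) {       // walk the payload rows
        uint8_t code = payload[i++];
        if (code == 0x04 && i < len) {       // 0x04: attention, one value byte
          uint8_t attention = payload[i++];  // 0..100
          if (attention > 70) {
            // strong "intent": e.g. raise the drone's throttle here
          }
        } else if (code >= 0x80 && i < len) {
          i += payload[i] + 1;               // multi-byte row: skip vlength + value
        } else {
          i++;                               // other single-byte rows: skip value
        }
      }
    }

Note how the checksum test is point 1 restated in software: a packet mangled by noise is dropped outright rather than misread as intent.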

Finally, a bit of trivia: NeuroSky's CEO is an ethnic Chinese, Stanley Yang (杨士玉), and a UC Berkeley standout at that!

Here is a TEDx talk this editor loves:

人性化的科技時代 (A Humane Age of Technology): Stanley Yang (楊士玉) at TEDxTaipei 2013

http://www.youtube.com/watch?v=Z9GhIRkOKG0 (first you will need a tool to get over the wall...)

The little story Stanley tells at the end left this editor deeply moved (TAT); perhaps this is the original intention of 神念:

A mother called Stanley to ask whether there was any technology that could let her communicate with her son, who had been in a vegetative state since birth. The son was already 21; apart from his brain, nothing else worked, and he could express nothing.

Later, Stanley brought them 神念; of course, 神念 could only read the son's Yes or No.

Mom asked: Do you like eating hamburgers?

Through 神念, the son answered: Yes.

Mom asked: Do you like watching TV?

Through 神念, the son answered: No.

Mom asked: Do you know that I am your mom?

Son: Yes!

Then Stanley asked him: Mike, do you love your mother?

Son: Yes! Yes! Yes! Yes!

The mother wept. For the first time in 21 years, she had heard her son's heartfelt declaration: Mom, I love you.

Being a tech maker is not just about technology and making something awesome; what we care about even more is the humanity that lives inside us. More human insight! Thank you!

PS: Watch the video if you get the chance; Ted cried at the end...

 

A New Era for the Open Source Movement

From software to electronics: the open source movement extends from information technology into product development and manufacturing

 

The Origins of the Patent System

The patent system first budded in medieval Europe. By the mid-15th century, feudal monarchs had begun granting, by special charter, monopoly rights for certain merchants or craftsmen to exploit a particular technique exclusively. In 1474, Venice enacted the world's first patent statute; to attract and encourage invention, Venice granted the famous scientist Galileo a 20-year patent on his invention, a water-raising irrigation machine. The Venetian patent statute laid the foundation of the modern patent-law system and is the prototype of the modern patent.

The world's first complete patent law was born in England on the eve of the Industrial Revolution: the Statute of Monopolies of 1623. Regarded as the ancestor of modern capitalist patent law, it already contained the basic elements of a modern patent act. England's early patent system worked like a vast, fine-meshed net, sweeping the world's best talent and technology into England.

Other countries, on the eve of their own industrial revolutions, followed England and established patent systems one after another. With the U.S. Constitution of 1787, Americans wrote patent rights into a constitution for the first time, protecting invention with the nation's fundamental law. To this day, above the entrance of the U.S. Department of Commerce is carved a line from President Lincoln: "The patent system added the fuel of interest to the fire of genius."

The Rights and Wrongs of the Patent System

The original intent of the modern patent system was to give inventors a legal means of protecting their ideas for a limited period, a mechanism for channeling ingenuity into practical use, which was thought to encourage innovation. In the system's early days, it did indeed help the countries that protected intellectual property attract large numbers of outstanding people and foster a social climate of respect for knowledge. The classic example is James Watt, inventor of the improved steam engine: Watt lived in hardship early on while raising several children, yet in old age he was very wealthy, precisely because he had licensed the use of his engine so widely.

In the nearly 500 years since the modern patent system was born, it has undeniably spurred and encouraged innovation to a degree. But with time, and with the rise of fast-moving fields such as information technology and medicine, the traditional patent system has in practice drifted from its framers' intent, and its side effects have grown ever more visible. Briefly:

1. Applicants must spend great amounts of time and money to file a patent, often hiring lawyers, and throughout the decades of the patent's term they must keep spending money and time to maintain it, suing suspected infringers in court. The state, for its part, must spend large sums and judicial resources to examine, protect, and adjudicate everything connected with patent rights.

2. Individual researchers rush their ideas into patent filings. To fend off potential competitors during the application, they typically keep the technology or idea secret for the year or more the filing takes, blocking normal exchange among technical people and erecting artificial technology blockades. In an era of rapid development, by the time the patent is granted, the patented technology may already have lost its use value.

3. Patents are steeped in economics; profit-seeking and monopoly are their essential attributes. Many large companies use the patent system to entrench their monopolies and secure high, stable profits, undermining fair market competition. Patents of real economic value are either filed first by big companies or bought from individual applicants at high prices, becoming tools that cement monopoly positions.

Openness in the Early Days of the Computer

Back in the 1960s and 70s, when computers were still behemoths that only universities, research institutes, and big companies could own, most of those machines ran an operating system called Unix. Unix was an excellent OS developed at AT&T; although using it required paying AT&T a license fee, the code of that era was technically open, because you could obtain it and even modify it. In those days, the people using computers were the few who encountered them through work, hackers who knew intimately how the machines worked. Software circulated among clubs of computer enthusiasts, usually free of charge, so that each could tweak another's software into something that fit his own needs.

One famous episode: in 1976, two young men of the Homebrew Computer Club built the world's first personal computer in a garage by open, kit-style assembly: the Apple I. The original Apple computer was pieced together from chips entirely by DIY enthusiasts, meant for people who used their employer's computer at work by day and wanted to keep computing at home after hours. Yet today's Apple has created the world's largest closed-source ecosystem.

So what we now call the open source world was not at first a brand-new idea, but a movement to set things right after closed, copyrighted software had corroded the whole social ethos.

The Spread of the Patent System into Software: Copyright

Around the 1970s and 80s, as computer prices fell, more and more people came to the personal computer as a hobby rather than for work, and the many members of the Homebrew Computer Club kept up their habit of freely sharing software. It was then that Bill Gates, co-founder of Microsoft, wrote the club an open letter setting out his views on software ownership. His main points:

The most pressing problem facing hobbyists today is the lack of good software and tutorials; without good software, the computer of a user who cannot program is a wasted resource. Amateurs can hardly write high-quality software. Why should software not be sold for profit? Because most of you steal the fruits of others' labor: hardware must be paid for, yet software gets shared for free. Those who work to develop software receive no due reward; is that fair? Who would take unpaid work as a career? This habit of handing out free software inside and outside the club not only denies the worker his reward, it blocks the birth of excellent software. Anyone willing to pay for software is welcome to write to me with suggestions.

As everyone knows, Microsoft thereafter went down the road of developing and selling closed-source software, and it dominates the operating systems of most personal computers today. Most software companies since have followed Microsoft, adopting the same profit model to develop applications, pressing them onto discs and selling them to ordinary consumers. They hold that algorithms and source code are the trade secrets the company lives by, part of its wealth; to stay ahead, no one unnecessary may see them, and even in-house developers may see only the slice relevant to their own work.

 

The Formation of Open Source Software and Open Source Organizations

Richard Stallman, a computer enthusiast, worked in MIT's Artificial Intelligence Laboratory in the 1970s, where he joined a hacker community that ran much like the Homebrew Computer Club. But as he came into contact with proprietary commercial software, closed source left him uneasy: it shackled the desire to keep developing and improving software, and people could not get source code from software companies to make even beneficial modifications for their own needs. In his words:

That (commercial software) pushed me into a moral dilemma. When you receive a licensed operating system, you must sign a so-called agreement: you may not share the software with others, while you, the user, are subject to the vendor's control and restrictions. To me, the essence of that agreement was to make me a villain, to betray the rest of the world, to cut me off from society, from a cooperative community. I had felt that treatment myself, the feeling when others refuse to share with us because they too signed such agreements. This was obstructing us from doing useful work, so the idea of software copyright is wrong; this is not the way of life we want.

In kindergarten, our teachers taught us that when we got candy, we should share it with our playmates rather than hoard it. Sharing is a good social ethos, and we need that ethos.

Stallman, often called the father of free software, quit his MIT job, founded the Free Software Foundation (FSF), and publicly called on like-minded people to join in developing the GNU project. Its aim was to write program after program to replace the applications on Unix systems, with one difference: all of it open source, its code freely downloadable from the internet and modifiable. Once the text editor, the C compiler, the debugger, the mail programs, and the rest of the essential toolkit were done, they attempted the hardest piece of the project: a Unix-like operating system to replace Unix entirely, so the programs would no longer have to run atop a proprietary OS and a complete open source software stack could exist. By Stallman's own later account, their initial design approach may not have been ideal, making debugging hard and progress slow. Meanwhile, across the ocean in Finland, Linus Torvalds, a student at the University of Helsinki, was doing the same thing, only faster; when he released his operating system's source code online for free download, computer enthusiasts realized it filled exactly the gap left in the GNU project. With that, a complete open source software system existed: what we know today as the Linux operating system.

 

The Formation of Open Source Communities and the Use of "Copyleft" Licenses

Strictly speaking, Linux is only an operating-system kernel, and today it also serves as the kernel of Android, the de facto standard mobile operating system. This astonishing system was developed jointly by the programmers of the open source community.

An open source community is generally formed by people with shared interests, usually in a forum-like setting, publishing source code under an open source software license while providing a space for free study and exchange. Because open source software is built from the drip-by-drip, ant-colony contributions of programmers scattered around the world, these communities play an enormous role in driving open source software forward.

Eric Raymond, one of the movement's elders, described the striking advantages of community-driven development in his famous essay "The Cathedral and the Bazaar":

The Cathedral model: closed-source software built by dedicated staff within a company, or projects whose source is released only after the software ships; each version is controlled during development by an exclusive team.

The Bazaar model: the source code is public on the internet throughout development, open to inspection and contribution; the development of the Linux kernel, led by its creator Linus Torvalds, is the canonical example.

"Given enough eyeballs, all bugs are shallow." Raymond argues that cathedral-style development greatly inflates debugging time, because only a few developers may take part in fixing the code; the bazaar model is the reverse.

The cathedral and the bazaar have also been extended beyond software development: Wikipedia is a bazaar, while Nupedia and the Encyclopedia Britannica are cathedrals.

To keep open source software from being turned to profit by closed-source companies or individuals, and to ensure the sharing spirit endures, the GNU project proposed "Copyleft." Open source software does have copyright; its authors simply choose licenses which ensure that when you grant users the rights to obtain, modify, and redistribute your work, they must pass exactly the same rights along when they publish their improved versions, no more and no less, so that shared labor cannot be privatized and derivative works carry the freedom forward. Some render it in Chinese as 著佐权, to highlight that Copyleft complements copyright (著作权); other renderings, such as 反版权, 版权属左, 脱离版权, 版权所无, 版权左派, 公共版权, or 版责, also circulate, though several of them skew the meaning. Copyleft licenses do not oppose the basic institution of copyright; rather, they use copyright law to further the freedom to create.

The Business Model of Open Source Software

Commercializing open source development has been a bumpy road. At the community's birth, it faced hostility and demonization from Microsoft and other traditional closed-source companies, who even tried to pin charges on open source activists and haul them into court. To this day, people with little grasp of the movement's full picture still proclaim, doomsday-style, that the open source business model is on its last legs.

Eric Raymond, the open source elder, used to say that when you bring up open source software with someone, the lucky response is:

"Free software? That must be poor-quality, unstable, unreliable, unsupported stuff."

If you are less lucky, the response you get is:

"The Free Software Foundation's conduct gravely violates a patent-based society that respects knowledge; it is shameless commercial dealing."

Still, the first question facing the engineers, merchants, entrepreneurs, and service providers on the open source road is how to make money. Although the movement was launched by developers glad to share and contribute, the open source development model must find its own way to profit, or it can never answer the question in Bill Gates's open letter to the Homebrew Computer Club, "who would take unpaid work as a career?", and the movement cannot keep growing. It is like picking up a broom and sweeping the street on my own initiative: a genuine contribution to society, but if it brings no reasonable return, how many people can be moved to do it, keep doing it, and do it well?

Consider first how traditional software earns money. The closed-source model is obvious: develop behind closed doors, then sell the software product. If you know how programs are built, you know that compiled code is nearly impossible for a human to read; it can only be executed by the machine, so such software is hard to improve or study. The companies' reason is that the code and algorithms are trade secrets, the money tree and treasured heirloom the company lives by. For such software, upgrades, bug fixes, and other after-sales service can come only from the one vendor; no wonder their service is so poor.

And open source software? It is developed collaboratively by the community, and anyone may download it free and run it on their own machine. How do the developers profit? In practice, once software is put to commercial use, plenty of problems need people to maintain and support, even to modify or extend the software to fit particular needs, and there our business arrives. Moreover, the more a developer has contributed and the higher their standing in the community, the more it costs to hire them for commercial maintenance and development; or they can found consulting firms. The open source community is in effect a completely open stage for demonstrating one's real ability: as the saying goes, whether it is a mule or a horse, walk it around and see. Open source also lets the whole of society reuse the excellent code others have written: reuse what can be reused rather than rewriting it from scratch, and focus on the new functions and needs that have no solution yet, raising everyone's resource efficiency.

The Open Source Movement Expands into Electronics

Over the past decade and more, a series of open, easy-to-use electronics platforms appeared: BASIC Stamp, BeagleBoard, Wiring. But the biggest credit for turning open hardware into a movement belongs to Arduino.

Massimo Banzi was a teacher at a design school in Italy whose students kept complaining that they could not find a cheap, easy-to-use microcontroller for their design ideas. In the winter of 2005, Banzi discussed the problem with David Cuartielles, a Spanish chip engineer then visiting the school. The two decided to design their own circuit board and asked Banzi's student David Mellis to write a programming language for it. Within days, the board and the development environment were done, and they named it Arduino, a name inspired by a famous king in Italian history.

They soon found that even students who knew nothing of electronics or programming could, with a little coaching, use Arduino to realize their ideas quickly and make very cool things. The three authors then put the design on the internet, and within months it had spread rapidly across the web.

Today, the Arduino forums and community have not only a vast audience of visitors and supporters; companies such as Seeed Studio and SparkFun, which supply Arduino-compatible electronic components, are thriving too. With these resources, even people with no electronics background can quickly get up to speed and realize their own ideas in electronic design, and the forums and community are full of experts happy to share and guide beginners.

These idea-chasing enthusiasts build creations of every kind: DIY robots, 3D printers, CNC machine tools, and stranger things than we have ever heard of.

Chris Anderson, former editor-in-chief of Wired, called these people makers, threw himself into the maker movement, and wrote a book, "Makers: The New Industrial Revolution," encouraging everyone not just to realize their ideas but to turn them into products for the market, a new route from maker to entrepreneur.

O'Reilly Media, ever watchful of the IT industry's technology and future, could not sit still either, publishing 《爱上制作》 (the Chinese edition of Make) and a series of related books to popularize Arduino and maker culture, making the movement ever more accessible to ordinary people.

Open Source's New Territory: Product Development and Manufacturing

We live in a real, physical world; we need food, water, and all manner of tangible goods to raise our quality of life. Yet under the sway of decades of rapid progress in computing, computers, phones, and the internet have shaped our lives so deeply that the things of the virtual "bit" world seem more attractive to us, because they advance so fast, while progress in the real products of our daily lives lags far behind.

With software pulling electronics along, and with a new class of tools like the 3D printer, it is time to draw our creativity back into the real world of atoms. Not only should everyone have the right to enjoy and even take part in excellent software development and electronics making; manufacturing, long seen as a high-barrier field, is about to meet its own open source revolution.

Imagine that one day you, fiercely independent, tire of the iPhone's uniform looks. Unwilling to "wear the same shirt" on the street, and wanting everything you own to carry a custom touch, you decide to design and make your own iPhone case: you might first pick a model you like online, open and rework it on your computer, connect your own 3D printer, and print the case you want on the spot.

There is another mode of product development: when you feel the need to make something and the market offers no ready product, or you believe you can make it cheaper, you have unwittingly taken the first step of commercial product development. If what you build is needed not only by you but by others in many settings, your creation has real prospects.

An engineering project has a different character at each stage of development. After the need-finding stage above comes what we call brainstorming, a stage of finding the balance between creativity and risk: not only may the ideas be many and varied, the paths to realizing them may be too. As the saying goes, a workman who would do his job well must first sharpen his tools; at this stage, flexible product-development tools are the indispensable sharp edge. Real products are seldom purely mechanical, purely electronic, or purely software; the great majority are an organic fusion of all three.

As recounted above, in software and electronics we already have many good flexible tools. What about mechanics? Some suggest LEGO, and makers have proposed combining Arduino with LEGO bricks for mechatronic builds. LEGO is indeed the best of the lot in mechanical flexibility, but it is expensive and made of plastic, its own electronic modules are few, and the sensors and motors that can extend it are very limited. Moreover, LEGO is an entirely closed platform; the company offers almost no support for extending its parts and functions, which is no surprise given how limited its electronic modules are. Build new LEGO-based parts and bring them to market, and you may even find yourself sued for unwittingly tripping over patent clauses.

Our Goal and Our Work: Building an Open Mix-and-Match Construction Platform

After the wave of playful electronics making and open source software and hardware, with Arduino as its standard-bearer, swept across other countries, building projects of one's own interest is no longer the privilege of a few technical geeks. A crop of easy, graphical programming tools followed, greatly lowering the bar for the general public to enter the field. The mechanical-structure side, though, has lagged furthest behind, and that is the starting point for the work that may lie ahead.

A construction platform like LEGO but more professional, cheaper, and open would be welcomed, and so a team of maker entrepreneurs set out to create the Makeblock brand. Makeblock is an aluminum building-block system combining structural and electronic modules: basic structural parts, transmission parts, motors, sensors, controllers, and so on. The main parts are aluminum alloy, and an Arduino serves as the controller. With this platform, you can turn all kinds of creative ideas into reality in very little time. Prototyping a robot or an automated device normally demands mechanical, electronic, and programming skills all at once, which sets the DIY barrier high; Makeblock's guiding aim is to make building simple, so beginners can get in the door, everyone can make something and taste the fun of it, and adults and children can learn together. (The short sketch below gives a feel for how small the code side of such a build can be.)
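As one illustration of the electronics-plus-code side, here is a minimal Arduino sketch driving a single DC motor of a small rover through a generic H-bridge. The pin choices and wiring are assumptions made up for this example, and it deliberately uses only the stock Arduino core rather than Makeblock's own library, so read it as the flavor of the idea, not the platform's official API.

    // One DC motor of a small rover, driven through a generic H-bridge.
    const int IN1 = 7;   // H-bridge direction input A (assumed wiring)
    const int IN2 = 8;   // H-bridge direction input B (assumed wiring)
    const int EN  = 9;   // PWM enable pin: duty cycle sets the speed

    void motor(int speed) {                       // speed in -255..255
      digitalWrite(IN1, speed >= 0 ? HIGH : LOW); // sign picks the direction
      digitalWrite(IN2, speed >= 0 ? LOW : HIGH);
      analogWrite(EN, abs(speed));                // magnitude picks the speed
    }

    void setup() {
      pinMode(IN1, OUTPUT);
      pinMode(IN2, OUTPUT);
      pinMode(EN, OUTPUT);
    }

    void loop() {
      motor(200);    // forward
      delay(2000);
      motor(-150);   // reverse, a bit slower
      delay(2000);
      motor(0);      // stop
      delay(1000);
    }

Swap the fixed delays for sensor readings and the same twenty-odd lines become a line follower or obstacle avoider; it is the structural parts, not the code, that do most of the changing.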

Unlike a traditional toy manufacturer, Makeblock intends not only to sell the platform to those who need it, but to rally everyone's suggestions to improve this shared construction platform together. Makeblock is fully open source: the software, the circuit boards, and even the design ideas and drawings of the mechanical parts will all be posted to the community for discussion, drawing on China's inexpensive yet capable manufacturing to offer the world first-class open building tools. With such tools, many ideas can be verified and realized faster and more easily.

3D Printers and the Construction Platform: Complementary Strengths

In the past two years, with 3D printing booming and spreading, more and more people have tried using it to validate or manufacture their own products. It is a fine choice, but 3D printing still has problems to overcome and cannot yet fully replace traditional manufacturing, mainly for these reasons:

1. Consumer-grade 3D printers mostly use a limited handful of plastics, and in some settings plastic cannot meet the need.

2. Most 3D printers still print at fairly low precision, and the consumables are expensive, clearly unsuited to mass production.

3. 3D printing is slow, and it demands a certain amount of experience and 3D-modeling skill from the operator; it is not a quick skill to pick up.

In the brainstorming stage, ideas by the dozen need validating, so the construction platform keeps an advantage nothing else replaces, especially an open platform, which offers more parts and suppliers to choose from.

My own view is that rather than arguing over whether the construction platform or 3D printing is better, why not combine them in product development? Choose parts from the construction platform first; they should cover most of the parts in your product and serve as its skeleton. Then use the 3D printer for the decorative or special-requirement parts, above all the aesthetic ones; such parts rarely bear loads, and plastic there is both handsome and light.

Thank you for giving your precious time to this article. If you have any comments or suggestions you would like to discuss, please feel free to get in touch at any time. I am @张成wust, founder of 创元素.