岭南文化集邮册高清图片和PDF分享地址
https://pan.baidu.com/s/18dDAneIiBoHigest_gL5LQ
如有问题,请转到https://sns.io/sell/B7SKVKUU获得提取码
记得高中的时候参加生物竞赛,老师推荐我们去多看看北京大学已故教授陈阅增老先生主编的《普通生物学》。因为那个时候高中生物的三本教材实在内容不多,在广州市重点高中读书的我们很快就读完了这三本薄薄的课本,这显然满足不了我这种喜欢在科学道路上不断欣赏新风景的人的胃口。
到了大学,无论是临床专业的同学,还是普通理工农医类相关专业的同学,都常常受困于厚重的黑白课本而丧失读下去的耐力。直到我看到清华大学吴庆余老师出版的《基础生命科学》一书,才觉得每个人的第一本生物入门教材都应该用彩色铜版纸印刷才对。
在一次偶然的机会,我听到了来自台湾清华大学特聘教授李佳维老师的公开课,才知道原来我们与真正一流的科学教育还有这么远的距离。李老师拿出的这本名为Campbell Biology的课本,以及课堂附带的录像和PPT资料中的配图令人震撼。课堂上老师叮嘱每一个学生说:这本书很厚,但可以作为我们一生的好朋友;并且这是一本原文书,如果你没有足够的时间弄懂每个字句,那么你至少要看懂并能向其他人讲解其中一幅幅精美的插图。以图读书、以图阐述概念,这是从起步的教育开始就向同学们培养正确的科学精神和态度。相信很多同学和我一样,往往到了研究生阶段才幡然醒悟这个道理。
我之所以花费大半天的时间写下这段心路历程和大家分享,并大力推荐这本书,最主要的原因是希望后来的同学们不要重蹈覆辙、少走弯路,从最好的材料开始上手学习生物学,一步到位。这对于生物、医学、农林等相关专业的同学,尤其是希望留学的同学们来说更是重要。对于正处于中学阶段、学有余力的同学,从这本书中选读感兴趣的部分也是大有裨益的。
Campbell Biology 是所有生物专业的学生必学的课本,其所涵盖的知识深度和广度使其成为生物圈最经典的教材之一。所有生物类学术竞赛,从热门的美国生物竞赛USABO、英国生物竞赛BBO,到级别最高的IBO,Campbell Biology都能作为完美的课本。
生物学不同于诸如电子通信、计算机等其它理工学科,专业名词众多且不易上手,因此对于初学者来说,在尽量阅读原文书的同时能有权威可靠的中译本参照就显得尤为重要。
台湾地区诸多高校采用此书作为大一相关专业教科书,第八版是其经典版本,有权威的台湾繁体中译本对照本,可惜广大的莘莘学子们不易弄到,我们通过较为繁复困难的渠道才获得。希望获取阅读此书或获得购买渠道的朋友欢迎通过微信ID:zhangchengwust与我联系,可以向您提供相关资讯。
此书籍英文版已经通过:http://www.i-element.org/campbell-biology/ 分享
配合台湾国立清华大学开放式课程阅读,对于生命科学(包括理、医、农)相关的同学作为专业基础第一门入门课非常合适,相关专业从业人员或爱好者作为必备资料时常翻阅也是极好的。
09801 生命科學院 生命科學一(访问不了或不稳定请自行寻找科学方法)
http://ocw.nthu.edu.tw/ocw/index.php?page=course&cid=16&
09802 生命科學院 生命科學二(访问不了或不稳定请自行寻找科学方法)
http://ocw.nthu.edu.tw/ocw/index.php?page=course&cid=17&
簡錄:
第 1 章 緒論:生命研究的主題
第 2 章 生命的化學內涵
第 3 章 水與環境適存度
第 4 章 碳與生命分子的多樣性
第 5 章 生物性大分子的結構與功能
第 6 章 細胞之旅
第 7 章 膜的結構與功能
第 8 章 代謝作用之簡介
第 9 章 細胞呼吸:化學能量的獲取
第 10 章 光合作用
第 11 章 細胞通訊
第 12 章 細胞週期
第 13 章 減數分裂與有性生命週期
第 14 章 孟德爾與基因概念
第 15 章 遺傳的染色體基礎
第 16 章 遺傳的分子基礎
第 17 章 從基因至蛋白質
第 18 章 基因表現的調控
第 19 章 病毒
第 20 章 生物科技
第 21 章 基因體及其演化
第 22 章 累世修飾:達爾文的生命觀
第 23 章 族群的演化
第 24 章 物種起源
第 25 章 地球上生命的歷史
第 26 章 種系發生與生命樹
第 27 章 細菌與古菌
第 28 章 原生生物
第 29 章 植物多樣性之一:植物如何移居陸地
第 30 章 植物多樣性之二:種子植物的演化
第 31 章 真菌
第 32 章 動物多樣性之簡介
第 33 章 無脊椎動物
第 34 章 脊椎動物
第 35 章 植物的構造、生長與發育
第 36 章 維管植物的資源獲得與運輸
第 37 章 土壤與植物營養
第 38 章 被子植物的生殖與生物科技
第 39 章 植物對內在訊號與外在訊號的回應
第 40 章 動物構造與功能之基本原理
第 41 章 動物的營養
第 42 章 循環與氣體交換
第 43 章 免疫系統
第 44 章 滲透調節與排泄
第 45 章 激素與內分泌系統
第 46 章 動物的生殖
第 47 章 動物的發育
第 48 章 神經元、突觸和傳訊
第 49 章 神經系統
第 50 章 感覺機制與運動機制
第 51 章 動物行為
第 52 章 生態學與生物圈概論
第 53 章 族群生態學
第 54 章 群落生態學
第 55 章 生態系
第 56 章 保育生物學與復育生物學
预览版
如果您在移动设备上无法预览上面的PDF文档,您还可以点击这里访问
https://pan.baidu.com/s/1b69t1N3QV4_-DorodWDxaQ
Encyclopedia of Electronic Components Vol.1 电子元器件百宝箱-电源与转换
Resistors, Capacitors, Inductors, Switches, Encoders, Relays, Transistors
Encyclopedia of Electronic Components Vol.2 电子元器件百宝箱-信号处理
LEDs, LCDs, Audio, Thyristors, Digital Logic, and Amplification
Encyclopedia of Electronic Components Vol.3 电子元器件实用手册-传感器篇
Sensors for Location, Presence, Proximity, Orientation, Oscillation, Force, Load, Human Input, Liquid and ... Light, Heat, Sound, and Electricity
预览版
如果您在移动设备上无法预览上面的PDF文档,您还可以点击这里访问
《爱上制作:电子元器件百宝箱(第1卷)》就如同一个收纳了多种元器件的百宝箱,在功率器件、电磁、分离半导体的大层级下,又下设多个子集,子集下分为28个元器件条目。每个条目中,分别讲解该元器件可以做什么、如何工作、演变、参数、如何使用和禁止事项。《爱上制作:电子元器件百宝箱(第1卷)》每个条目是相对独立的,你可以从任何感兴趣的章节开始阅读,寻找有用的知识点。你也可以像使用工具书一样使用它,遇到疑惑时翻阅查找。
本书是《电子元器件百宝箱(第1卷)》的续篇,它如同一个收纳了多种元器件的百宝箱,针对LED、LCD、音频、晶闸管、放大器等信号处理问题做了阐述。全书分为分立半导体、集成电路、光源与指示器、声源几个大类,每个大类再分成若干个小项目,比如二极管、比较器、定时器、解码器、LED指示器等,各个项目再从功能、工作原理、演变过程、参数、使用方法、注意事项等方面做详细介绍。
本书介绍常用电子元器件的基本信息、工作原理、使用方法、参数、注意事项等,便于初学者查找相关元器件的应用方式。本卷侧重于介绍传感器,包括GPS、磁力计、红外传感器、倾斜传感器等。作为电子元件工具书,本书在介绍它们的经典用途的同时,又以DIY的新角度介绍如何在项目中使用这些电子元件,为制作爱好者提供准确的信息,甚至可以当作一部工具书来使用。这是一本全彩的电子元件百科全书,在风格上继承和发扬了《爱上制作》系列书的生动活泼。想知道如何熟练使用电子元器件吗?本丛书第三卷(共三卷)涵盖了您在项目中会用到的传感器关键知识,包含照片、原理图和表格。通过本书您可以了解各个器件的用途、工作方法、其中蕴含的道理,以及不同类型的衍生器件。不管是电子行业的新手还是高手,都可以在本书里探索到新的知识和技巧。
目录(节选,页码从略):
1 GPS
1.1 它可以做什么:原理图符号、GPS子模块
1.2 它如何工作
1.3 演变
1.4 参数
1.5 如何使用它:每秒脉冲输出数
1.6 禁止事项:静电放电、接地不良、虚焊、许可限制、搜星失败、速度或高度超出限定值
2 磁力计
2.1 它可以做什么:原理图符号、IMU、应用
2.2 它如何工作:磁场、地轴、线圈磁力计、霍尔效应和磁阻
2.3 演变
2.4 如何使用它
2.5 禁止事项:磁干扰、安装不当
3 物体检测传感器
3.1 它可以做什么:原理图符号
3.2 演变
3.3 光检测:透射型光传感器、对射型光传感器
3.4 磁传感器
3.5 簧片开关:种类、参数、如何使用
3.6 霍尔效应传感器:工作原理、种类
3.7 参数
3.8 如何使用霍尔效应传感器
3.9 如何使用物体检测传感器:线性移动检测、中断检测、角度检测
3.10 不同传感器的优缺点汇总:光学物体检测传感器、簧片开关、霍尔效应传感器各自的优缺点
3.11 禁止事项:光传感器、簧片开关
4 被动式红外传感器
4.1 它可以做什么:原理图符号、应用
4.2 它如何工作:热释电传感器、检测单元、镜头组
4.3 演变
4.4 禁止事项:高温灵敏度衰减、检测窗口损坏、受潮
5 距离传感器
5.1 它可以做什么:原理图符号、应用
5.2 演变:超声波、红外线、相对优势
5.3 常见的超声波传感器:进口产品……
可读性强,甚至适合青少年用来了解电子元器件,书中有不少实物照片和理论图示,不过应用方面的介绍不多。
非常好的电子器件分类手册。
如果您不熟悉电子产品,这是我推荐的第一本书。它从器件的基础讲起,一直讲到集成电路的设计。在这个版本中,作者专注于集成电路而不是分立元件电路。对于任何一个从模拟/数字电子设备入门的人,我肯定会推荐它。我之前研究过的其他书籍,常常不加任何理由或解释、也没有太多细节,就把某些结论或方程直接抛给读者。而到目前为止,我在这本书中从未遇到此类困难:所有的概念和方程都有解释,没有任何逻辑缺陷。
https://pan.baidu.com/s/1CJm6j6-cSeSJU4ozldfXfA
https://pan.baidu.com/s/1Iy4HxgC8pUMpAlfVuSd7wA
https://pan.baidu.com/s/16JGGuO_Gdt2aUY8zg7YohA
预览版
如果您在移动设备上无法预览上面的PDF文档,您还可以点击这里访问
James Stewart的《Calculus》是海外优秀数学类教材系列丛书之一。在美国,它占据了50%-80%的微积分教材市场,其用户包括耶鲁大学等名牌院校及众多一般院校共600多所。《微积分》历经多年教学实践检验,内容翔实、叙述准确,对每个重要专题,均从语言、代数、数值、图像等角度予以陈述。作者及其助手花费了三年时间,在各种媒体中寻找最能反映微积分应用的教学实例,并把它们编入了教材。因此,《微积分》的例题和习题贴近生活实际,能充分调动学生学习的兴趣。此外,《微积分》语言朴实、流畅,可读性强,比较适合非英语国家的学生阅读。值得一提的是,《微积分》较好地利用了科技手段:随书附赠两张CD-ROM,一张称为“感受微积分”,提供了一个实验环境,如同一个无声的老师,用探索、发现式的方法逐步引导学生分析并解决问题,还能链接到学习网站www.stewartcalculus.com;另一张称为“交互学习微积分”,包含与微积分教学有关的视频与音频等。
詹姆斯·斯图尔特(James Stewart),毕业于斯坦福大学和多伦多大学,并在这两所大学分别取得了硕士和博士学位;曾在伦敦大学从事研究工作;在斯坦福大学期间深受数学教育大家乔治·波利亚(George Polya)的影响;现为加拿大麦克马斯特大学的数学教授。他的研究领域是调和分析。他所编写的若干本微积分以及微积分基础的教科书都十分畅销。
考研数学如果你先用过James Stewart的这本极品圣书,什么登登、乐乐、先开、正元都是你身后的小山丘……
有了三轴加速度计,又有开源的相关代码库,我们非常容易就能搜集加速度的数据。我使用Arduino控制板和OpenLog数据转录模块,收集了大量的加速度数据。不过问题也随之而来,你如何处理这样杂乱而庞大的数据来获得有用的信息?
这里有个非常实际的应用案例。多年前,我搭建了一个基于拳击速度球沙袋的计数器,如上图所示。对于拳击初学者来说,速度球是一个液滴状的袋子,初练的拳击手通过快速打击它来训练肩膀、发展手脑协调。标准回合是三分钟,并且由于袋子弹跳的速度极快,几乎不可能通过人工来计数。我决定搭建一个计数器,然后抽空慢慢改进。仅仅把加速度计连上带有显示器的Arduino控制板,就万事大吉了吗?不,他们说得对:如何处理搜集到的大量杂乱数据的算法才是大麻烦。
加速度计采集到的数据凌乱了我的天
为了看出这些返回数据中的端倪,我试图把这些数据转换成图形来显示。然而现实很残酷,真实世界中的有用信号总是被各种因素的噪声掩埋在其中。
不,等等,亮瞎我眼的是似乎这些数据还是有一定的周期性和特点的,元芳你怎么看?
不幸的是,即便采样频率高达500赫兹,这些数据仍然无法直接给出我要统计的击打次数。拿着这些数据,我用尽洪荒之力来构建一套能够统计出击打次数的系统。
/*
BeatBag - A Speed Bag Counter打击沙袋-一个击打次数统计器
Nathan Seidle(老板名字)
SparkFun Electronics火花快乐电子
2/23/2013(代码编写时间)
License: This code is public domain but you buy me a beer if you use this and we meet someday (Beerware license).
许可协议:本代码属于公共域,你想拿去干嘛就干嘛,不需要询问任何人或组织更不用付费,甚至连代码源出处都不用标明。但如果你用了这些代码并且某天邂逅了我,你要买瓶啤酒请我喝(啤酒协议)
BeatBag is a speed bag counter that uses an accelerometer to counts the number hits. 打击沙袋是一个用加速计来统计沙袋被击打次数的装置。
It's easily installed ontop of speed bag platform only needing an accelerometer attached to the top of platform. 它很容易安装在速度袋平台的顶部,只需要一个加速度计连接到平台的顶部。
You don't have to alter the hitting surface or change out the swivel.
你不需改变击球表面或改装它。
I combine X/Y/Z into one vector and look only at the magnitude.
我将X/Y/Z三轴数据合成为一个矢量,并只观察其幅值的大小。
I use a fourth order filter to see the impacts (accelerometer peaks) from the speed bag. It works pretty well.
我使用四阶滤波器来获取冲击信号(加速度峰值),效果还不错。
It's very reproducible but I'm not entirely sure how accurate it is. I can detect both bag hits (forward/backward) then I divide by two to get the number displayed to the user.
此结果的可重复性很好,不过我不完全确定它的准确度。我能检测到沙袋向前和向后两个方向的撞击,然后除以二,得到显示给用户的击打次数。
I arrived at the peak detection algorithm using video and raw data recordings. After a fourth filtering I could glean the peaks. There is probably a much better way to do the math on the peak detection but it's not one of my strength.
我是借助视频和原始数据记录得出这个峰值检测算法的。经过四次滤波之后,我才能辨识出这些峰值。峰值检测很可能有更好的数学处理方法,不过这并不是我的强项。
Hardware setup:硬件连线指导:
5V from wall supply goes into barrel jack on RedBoard. Trace cut to diode. RedBoard barrel jack is wired to power switch then to Vin diode. Display gets power from Vin and data from I2C pins. Vcc/Gnd from RedBoard goes into Bread Board Power supply that supplies 3.3V to accelerometer. Future versions should get power from 3.3V rail on RedBoard.
MMA8452 Breakout ------------ Arduino
3.3V --------------------- 3.3V
SDA(yellow) -------^^(330)^^------- A4
SCL(blue) -------^^(330)^^------- A5
GND ---------------------- GND
The MMA8452 is 3.3V so we recommend using 330 or 1k resistors between a 5V Arduino and the MMA8452 breakout.
The MMA8452 has built in pull-up resistors for I2C so you do not need additional pull-ups.
3/2/2013 - Got data from Hugo and myself, 3 rounds, on 2g setting. Very noisy but mostly worked
12/19/15 - Segment burned out. Power down display after 10 minutes of non-use.
Use I2C, see if we can avoid the 'multiply by 10' display problem.
1/23/16 - Accel not reliable. Because the display is now also on the I2C the pull-up resistors on the accel where not enough. Swapped out to new accel. Added 100 ohm inline resistors to accel and 4.7k resistors from SDA/SCL to 5V.
Reinforced connection from accel to RedBoard.
*/
#include <avr/wdt.h> //We need watch dog for this program
#include <Wire.h> // Used for I2C
#define DISPLAY_ADDRESS 0x71 //I2C address of OpenSegment display
int hitCounter = 0; //Keeps track of the number of hits
const int resetButton = 6; //Button that resets the display and counter
const int LED = 13; //Status LED on D13
long lastPrint; //Used for printing updates every second
boolean displayOn; //Used to track if display is turned off or not
//Used in the new algorithm
float lastMagnitude = 0;
float lastFirstPass = 0;
float lastSecondPass = 0;
float lastThirdPass = 0;
long lastHitTime = 0;
int secondsCounter = 0;
//This was found using a spreadsheet to view raw data and filter it
const float WEIGHT = 0.9;
//This was found using a spreadsheet to view raw data and filter it
const int MIN_MAGNITUDE_THRESHOLD = 1000; //350 is good
//This is the minimum number of ms between possible hits
//We use this to filter out peaks that are too close together
const int MIN_TIME_BETWEEN_HITS = 90; //100 works well
//This is the number of milliseconds before we turn off the display
long TIME_TO_DISPLAY_OFF = 60L * 1000L * 5L; //5 minutes of no use
int DEFAULT_BRIGHTNESS = 50; //50% brightness to avoid burning out segments after 3 years of use
unsigned long currentTime; //Used for millis checking
void setup()
{
wdt_reset(); //Pet the dog
wdt_disable(); //We don't want the watchdog during init
pinMode(resetButton, INPUT_PULLUP);
pinMode(LED, OUTPUT);
//By default .begin() will set I2C SCL to Standard Speed mode of 100kHz
Wire.setClock(400000); //Optional - set I2C SCL to High Speed Mode of 400kHz
Wire.begin(); //Join the bus as a master
Serial.begin(115200);
Serial.println("Speed Bag Counter");
initDisplay();
clearDisplay();
Wire.beginTransmission(DISPLAY_ADDRESS);
Wire.print("Accl"); //Display an error until accel comes online
Wire.endTransmission();
while(!initMMA8452()) //Test and intialize the MMA8452
; //Do nothing
clearDisplay();
Wire.beginTransmission(DISPLAY_ADDRESS);
Wire.print("0000");
Wire.endTransmission();
lastPrint = millis();
lastHitTime = millis();
wdt_enable(WDTO_250MS); //Unleash the beast
}
void loop()
{
wdt_reset(); //Pet the dog
currentTime = millis();
if ((unsigned long)(currentTime - lastPrint) >= 1000)
{
if (digitalRead(LED) == LOW)
digitalWrite(LED, HIGH);
else
digitalWrite(LED, LOW);
lastPrint = millis();
}
//See if we should power down the display due to inactivity
if (displayOn == true)
{
currentTime = millis();
if ((unsigned long)(currentTime - lastHitTime) >= TIME_TO_DISPLAY_OFF)
{
Serial.println("Power save");
hitCounter = 0; //Reset the count
clearDisplay(); //Clear to save power
displayOn = false;
}
}
//Check the accelerometer
float currentMagnitude = getAccelData();
//Send this value through four (yes four) high pass filters
float firstPass = currentMagnitude - (lastMagnitude * WEIGHT) - (currentMagnitude * (1 - WEIGHT));
lastMagnitude = currentMagnitude; //Remember this for next time around
float secondPass = firstPass - (lastFirstPass * WEIGHT) - (firstPass * (1 - WEIGHT));
lastFirstPass = firstPass; //Remember this for next time around
float thirdPass = secondPass - (lastSecondPass * WEIGHT) - (secondPass * (1 - WEIGHT));
lastSecondPass = secondPass; //Remember this for next time around
float fourthPass = thirdPass - (lastThirdPass * WEIGHT) - (thirdPass * (1 - WEIGHT));
lastThirdPass = thirdPass; //Remember this for next time around
//End high pass filtering
fourthPass = abs(fourthPass); //Get the absolute value of this heavily filtered value
//See if this magnitude is large enough to care
if (fourthPass > MIN_MAGNITUDE_THRESHOLD)
{
//We have a potential hit!
currentTime = millis();
if ((unsigned long)(currentTime - lastHitTime) >= MIN_TIME_BETWEEN_HITS)
{
//We really do have a hit!
hitCounter++;
lastHitTime = millis();
//Serial.print("Hit: ");
//Serial.println(hitCounter);
if (displayOn == false) displayOn = true;
printHits(); //Updates the display
}
}
//Check if we need to reset the counter and display
if (digitalRead(resetButton) == LOW)
{
//This breaks the file up so we can see where we hit the reset button
Serial.println();
Serial.println();
Serial.println("Reset!");
Serial.println();
Serial.println();
hitCounter = 0;
resetDisplay(); //Forces cursor to beginning of display
printHits(); //Updates the display
while (digitalRead(resetButton) == LOW) wdt_reset(); //Pet the dog while we wait for you to remove finger
//Do nothing for 250ms after you press the button, a sort of debounce
for (int x = 0 ; x < 25 ; x++)
{
wdt_reset(); //Pet the dog
delay(10);
}
}
}
//Initialize the display: reset the cursor, show the current count, set brightness
void initDisplay()
{
resetDisplay(); //Forces cursor to beginning of display
printHits(); //Update display with current hit count
displayOn = true;
setBrightness(DEFAULT_BRIGHTNESS);
}
//Set brightness of display
void setBrightness(int brightness)
{
Wire.beginTransmission(DISPLAY_ADDRESS);
Wire.write(0x7A); // Brightness control command
Wire.write(brightness); // Set brightness level: 0% to 100%
Wire.endTransmission();
}
void resetDisplay()
{
//Send the reset command to the display - this forces the cursor to return to the beginning of the display
Wire.beginTransmission(DISPLAY_ADDRESS);
Wire.write('v');
Wire.endTransmission();
if (displayOn == false)
{
setBrightness(DEFAULT_BRIGHTNESS); //Power up display
displayOn = true;
lastHitTime = millis();
}
}
//Push the current hit counter to the display
void printHits()
{
int tempCounter = hitCounter / 2; //Cut in half
Wire.beginTransmission(DISPLAY_ADDRESS);
Wire.write(0x79); //Move cursor
Wire.write(4); //To right most position
Wire.write(tempCounter / 1000); //Send the left most digit
tempCounter %= 1000; //Now remove the left most digit from the number we want to display
Wire.write(tempCounter / 100);
tempCounter %= 100;
Wire.write(tempCounter / 10);
tempCounter %= 10;
Wire.write(tempCounter); //Send the right most digit
Wire.endTransmission(); //Stop I2C transmission
}
//Clear display to save power (a screen saver of sorts)
void clearDisplay()
{
Wire.beginTransmission(DISPLAY_ADDRESS);
Wire.write(0x79); //Move cursor
Wire.write(4); //To right most position
Wire.write(' ');
Wire.write(' ');
Wire.write(' ');
Wire.write(' ');
Wire.endTransmission(); //Stop I2C transmission
}
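上面的程序引用了initMMA8452()和getAccelData()两个函数,但原文没有贴出它们的实现(它们来自SparkFun的MMA8452例程)。下面给出一个基于MMA8452Q数据手册常见寄存器用法的最小示意实现,仅帮助理解数据从何而来;其中的寄存器地址、±2g量程等设置均为笔者的假设,并非原作者的代码。
// 示意性实现(非原作者代码):读取MMA8452Q三轴数据并返回合成幅值
#include <Wire.h>
#define MMA8452_ADDR 0x1D   //SA0接高电平时的默认I2C地址(假设)
#define WHO_AM_I     0x0D
#define CTRL_REG1    0x2A
#define XYZ_DATA_CFG 0x0E
#define OUT_X_MSB    0x01
static void mmaWrite(byte reg, byte val)
{
  Wire.beginTransmission(MMA8452_ADDR);
  Wire.write(reg);
  Wire.write(val);
  Wire.endTransmission();
}
static byte mmaRead(byte reg)
{
  Wire.beginTransmission(MMA8452_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false); //重复起始条件,不释放总线
  Wire.requestFrom(MMA8452_ADDR, 1);
  return Wire.read();
}
boolean initMMA8452()
{
  if (mmaRead(WHO_AM_I) != 0x2A) return false; //器件ID不符,初始化失败
  mmaWrite(CTRL_REG1, 0x00);    //先进入standby才能修改配置
  mmaWrite(XYZ_DATA_CFG, 0x00); //±2g量程(与正文提到的2g setting一致,此处为假设)
  mmaWrite(CTRL_REG1, 0x01);    //进入active,开始输出数据
  return true;
}
float getAccelData()
{
  byte raw[6];
  Wire.beginTransmission(MMA8452_ADDR);
  Wire.write(OUT_X_MSB);
  Wire.endTransmission(false);
  Wire.requestFrom(MMA8452_ADDR, 6); //连续读出X/Y/Z各两个字节
  for (byte i = 0 ; i < 6 ; i++) raw[i] = Wire.read();
  float mag = 0;
  for (byte i = 0 ; i < 3 ; i++)
  {
    //12位数据左对齐存放,先拼成无符号数再还原符号
    unsigned int u = (((unsigned int)raw[i * 2] << 8) | raw[i * 2 + 1]) >> 4;
    int axis = (u > 2047) ? (int)u - 4096 : (int)u;
    mag += (float)axis * (float)axis;
  }
  return sqrt(mag); //三轴矢量合成后的幅值,对应正文“只看合成幅值”的思路
}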
以上是我非常不专业的尝试:用自己想出来的滤波算法来抑制噪声、获取有效信息。到了这一步,我把相关流水数据的记录导入LibreOffice(一款完全开源免费的办公软件,和微软的Office类似),然后通过里面的数学函数功能来尝试找到一个处理这些数据的算法,以获得真正合理有效的沙袋被击打次数的信息。从中,我得到了两个结论:
我非常确定这个问题有很大的改善空间。所以,我想出这么一个主意:我们来进行一场比赛。我们希望能得到这方面专业人士所提供的指导和帮助,籍由这次赛事来获得解决实际问题的思路和方法,汲取专业算法设计者的思路。你可以从这里获得相关的数据记录,这些文件的文件名中已经给出了相应的击打次数;如果你不信,你还可以从这里看到相关的视频。
我们是开源硬件的忠实信徒。你的劳动成果必须以开源协议的方式发布,并且不能禁止此算法用于商业目的。我没打算把这个东西做成产品,不过如果有人想要运用此算法做一个击打计数器并出售盈利,我们也乐见其成。
如何参与?
请发送你的相关链接到反馈或文章评论区。我们将在本月底(6月30日)结束此竞赛,届时我会把新的解决方案带到健身房直接测试并公布结果。天外有天,人外有人,如果有不止一个非常成功的解决方案,我们将随机抽取一位作为比赛的胜利者,并公布所有通过测试者的结果和相关信息。
等一等,赢了这个比赛会有什么奖励?Wait, wait. So what do I win?
我们将邀请你和你的一位陪同者乘飞机来丹佛,下榻博尔德的酒店,带你参观火花快乐电子公司并一起去前沿拳击协会一同看看你代码的最终成果,一起在博尔德吃喝玩乐一把。如果你来自美国以外的地区,我们就只能提供你一人的往返机票,而非两人。
截止到2016年7月5日:感谢所有的参与者!即日起停止接受参赛方案,因为我们需要花费一段时间来测试每位参与者提交的解决方案。许多人的方案看起来都超赞的,我们会尽快发布更新消息并公布得胜者。
今年(2016)早些时候,Nathan Seidle,SparkFun(火花快乐)的创始人,提出了要众包一个算法问题。参加此次众包的方案经过筛选后,最终有一位参与者的方案被选中,他就是此次众包竞赛的获胜者Barry Hannigan。我们希望他能将解决这个算法难题的过程整理成一篇教程,给大家带来更多的启发。这篇文章就是关于他如何解决一个现实世界中的问题。即使这个问题并不是你目前工作中棘手的难题,听他介绍如何一步一步达成目标的过程,相信热爱开源硬件、电子及软件技术的你也会感到醍醐灌顶。
点击下面托管在Github的程序,你可以得到这个问题的相关代码内容
由于赢得了Nate的高速拳击计数比赛,我有幸去博尔德的火花快乐电子公司总部和老板Nate面基。在我们的探讨过程中,我们认为很有必要专门写一篇关于如何在短时间内解决复杂问题的教程。我会以解决这个问题的过程为例,希望你日后能把同样的思路用于解决自己的问题,无论问题规模大小。
对于一个完整的软件项目,从工程师的角度来看,都要经历四个主要阶段(In full-fledged software projects, from an Engineer’s perspective, you have four major phases):需求分析、架构设计、代码实施和验证测试。
毋庸讳言,架构设计和代码实施似乎是所有热衷自己专业的软件工程师最感兴趣的工作,因为这些过程充满创造力和乐趣。于是,自然地就会有一种只分析了问题的一个方面就急于开始架构设计和代码实施的冲动。然而,我要一再强调,上述过程中的第一步和最后一步对于项目的最终成功至关重要,无论问题的规模是大还是小。如果你对此有所质疑,可以设想一下:就这个沙袋计数问题,我们可以很快地设计出架构并实现,但由于我手头没有真实的装置去测试,只有依靠最后的验证环节和少量的最终修改,才能修复问题并获得正确的结果;反过来说,一个漂亮的设计和优雅的实现,如果无法实现最终所需的功能,也是一个失败的方案。
我没有将原型设计单独列为一个阶段,因为根据具体的问题,原型测试可能属于某一个阶段,也可能贯穿多个阶段。例如,如果对一个问题的实现方式还没有完全了解,原型测试就可以帮助确定需求,至少可以提供一个概念验证,或者验证某种技术是否有助于实现此方案。总而言之,原型测试所扮演的角色可能不止出现在一个阶段里。
回到这个案例中,即便是如此小的项目,我也建议你花点时间把项目分成上述四个阶段来完成,否则就很容易有疏漏。为了确定完整的项目需求,我们来罗列这个项目的每一项所需。网站上列出的需求可以归纳成五个部分。
在这个案例中,我把Nate所做的工作视作为说明需求所做的原型尝试,用于指明如何构建有效的系统。通过Nate叙述的系统搭建过程,我们知道这是一个安装在速度球沙袋基座上的加速度计,系统的采样周期是2毫秒;我们也知道使用多项式平滑滤波的方式可以看到信号的峰值,但却无法获得准确的击打次数统计。
对于这样一个较小的项目,在实现过程中,我们不必过于正式地罗列目标(需求分析):
现在需求已经明确,可以正式开始进入解决问题的阶段了。时间紧迫,而我又没有硬件可以实时测试并直观地看到算法处理数据的结果,于是我选择在电脑上用Java编程环境来验证测试。我先写程序把搜集到的数据以图表方式显示出来。我使用NetBeans这个Java开发环境已经多年,其中JFreeChart库用于绘制数据图表很赞,所以我先把此库引用进项目。NetBeans用于构建图形界面的开发非常方便,我只需要在现有的图形界面上布局空白面板,并调用JFreeChart库在其上绘制图形,就能很容易地做出类似示波器视图的效果。
由于这个算法的测试时间紧迫,我第一次测试就尽可能使用面向对象的方法,借助Java语言中已有的特性尽可能便捷地达成目的,之后再改写成更像C语言的算法操作步骤。我先直接把记录数据中的X、Y、Z分别绘制出来。观察现有的每一份数据记录后,我觉得需要先去掉偏置量(例如重力加速度引起的),然后求三个分量平方和的平方根。我把相邻的数值求平均使曲线稍显平滑,并以峰值之间的最小时间作为基准来执行滤波。总之,这么干反而使图表上显示的情况更加糟糕。我决定放弃X和Y分量:一方面我不清楚加速度计的安装方向,另一方面它也不可能每次都精确放置在同一个位置。对我来说糟糕的是,即使只考虑Z轴分量,图像看起来还是完全淹没在噪声之中。我发现流水数据的峰值之间非常相近,只有我设定的峰值间最小时间差对记录击打次数有些意义,其它的数据似乎并没有太多有价值的信息。甚至有些时候计数都不随着击打而增加,问题到底出在哪里?
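按这段描述,第一轮尝试的处理流程大致是:去掉偏置(重力)、求三轴平方和的平方根、对相邻样本取平均平滑、再用峰值间最小时间间隔做门限计数。下面是笔者按此思路写的一个极简C++示意,其中的偏置、门限等数值均为假设,仅用于说明流程,并非当时的Java原型代码。
// 极简示意:去偏置 -> 求合成幅值 -> 相邻样本平均 -> 最小时间间隔门限计数
#include <cmath>
struct FirstPassCounter {
  double biasX = 0, biasY = 0, biasZ = 0; //用静止数据预先估出的偏置(含重力),假设值
  double threshold = 350;                 //幅值门限,假设值
  long   minGapMs = 250;                  //两次计数之间的最小间隔,对应正文的250毫秒
  double prevMag = 0;
  long   lastHitMs = -1000000;
  int    hits = 0;
  void sample(double x, double y, double z, long tMs) {
    double mag = std::sqrt((x - biasX) * (x - biasX) +
                           (y - biasY) * (y - biasY) +
                           (z - biasZ) * (z - biasZ));
    double smoothed = 0.5 * (mag + prevMag); //相邻两个样本求平均,稍作平滑
    prevMag = mag;
    if (smoothed > threshold && tMs - lastHitMs >= minGapMs) {
      hits++;                                //超过门限且距上次计数足够久,记一次击打
      lastHitMs = tMs;
    }
  }
};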
下面就是执行了函数runF1后生成的波形图。蓝色的信号是Z轴滤波后的数据,红色是用于记录击打时产生的峰值。正如我之前所说,如果不给每次击打的记录设置最少250毫秒的时间间隔,计数器就会疯狂地增加计数。注意到我引入了两个5毫秒的延时来处理峰值,情况就会有所改善;如果把时间间隔提升到10毫秒,情况会更加改善。我稍后会谈到更多关于信号对齐的问题,不过就这幅图来看,你就知道这个步骤对于获得准确结果何等重要。
蓝色信号是Z轴滤波后的图形,红色是其达到阈值时用于计数的击打
如果你仔细观察虚拟示波器的输出,你就会发现在25000毫秒到26000毫秒之间的1秒时间里,有9次显著的加速度事件发生,难怪Nate在这一秒记录了九次击打。那么在一秒的时间内,我们预期的击打次数究竟能有多少?回到绘制的图形上,我需要增加一些其它的近似方法或者约束。此时别忘了谦逊的美德:把自己想得越高明,摔下来的时候也越疼。
典型的需求分析报告中,包含了问题解答的有效域。类似于一些一元二次方程出现两个数学解答时,往往符合合理事实的答案只有其中一个。譬如根据条件列出包含未知数的方程中,未知数是年龄,长度等只可能是正数的解答,而数学解答给出了一正一负的结果。那么负数的解答显然是不合事实情理的,应该被舍去。
通常在进行问题的需求分析时,就已经确定下了问题的可行域。根据具体问题的场景相关知识以及设计方案的过程中,我们就知道问题解答范围的合理性。我对于拳击和速度球沙袋没有什么太多的认知,所以做些Google搜索就很有必要。
我得到的重要信息是:拳击手每次击打速度球沙袋时,会导致沙袋与基座之间发生三次剧烈接触:一次向前(击打方向),然后向后(击打方向的反向),然后再一次向前(击打方向),之后拳击手才给出下一次击打。这样一个击打周期中实际包含4次重要的加速度事件:一次来源于击打时的震动,另外三次源于沙袋和基座之间的碰撞。
现在,我们看着波形信息的意义就明确多了。每次击打沙袋并不在数据上只产生一次峰值。我的第二个问题是,对于一个正常的拳击手来说,他每秒可以击打沙袋多少次。我尝试用直观常识去思考,这肯定有个合理的范围,然后我查找了和拳击竞技相关资料的网站,以及结合相关视频,我最后自己总结了一下,拳击手训练时每秒的击打次数应该在2次到4次是合理范围。这样,问题确定下来了,我需要寻找在加速度计采集的信息中,发生频率在2至4赫兹之间的事情。是时候开始进行架构设计和代码实施了。
虽然每个人在架构设计和代码实施阶段的思路方式可能略有不同,但我还是强烈建议你采用迭代的策略来进行这两个阶段,尤其是当你面对一个没有明确清晰解决方案的问题、只能以尝试的方式推进时。我还建议,当你准备对现有算法进行重大调整时,最好先创建一个新的副本再开始修改,或者新建一个空函数,把原来的代码拷贝进去再改。这样的源代码迭代控制方法能够保留你原有的迭代路径,如果改出了问题,或者之前的代码有值得引用的地方,都可以随时返回。我通常在写完10-20行代码后不会继续往下写,而是先想办法运行它、打印一些东西,以确认我的逻辑和假设是正确的。在我的整个“码农”生涯中,如果写代码时没有可以验证运行的目标机器,我会很不适应这样的工作方式。2006年时,我曾听到过一位海军上将的话:
“少量地构建,少量地测试,从中学到很多。(Build a little, test a little, learn a lot.)”
- 美国海军上将迈耶斯
我非常认同他的说法,因为迭代验证能确保我的代码是朝着我想要的方向进发的。它要么确认你的假设,要么揭示你正在朝错误的方向迈进;无论哪一种,都能让你一直快速走在正确的道路上,而不会在做了一大堆工作之后才发现徒劳无功。这也是我选择在这次方案中使用Java作为原型开发平台的原因之一:我没有实际的硬件装置可用于测试,但我仍能快速地运行、绘图并测试代码。
下面你会看到6个runFx()函数,它们逐毫秒地驱动并验证当前的代码,使我可以在Java绘图窗口中查看数据滚动和滤波后的外观。我将X、Y和Z加速度数据与X、Y和Z平均值一起传入。由于我在大多数算法中只使用Z数据,其余的值我只是顺带传给绘图部分,所以查看第1到第5个算法的图形时会有点混乱,因为它们与图例不匹配。但是,实时绘图允许我查看数据并观察击打计数器的增长。我实际上可以看到并感觉到击打的节奏感,以及长时间恒定节奏下共振对加速度数据的影响。除了使用Java System.out.println()函数的文本输出之外,我还可以将数据输出到NetBeans IDE中的窗口。
如果你看看我的GitHub上的Java子目录,有一个名为MainLoop.java的文件。在该文件中,有一些名为runF1()到runF6()
的函数。 这些是我的速度袋算法代码的六个主要迭代。
这里是六个迭代中的每一个的一些要点。
runF1() 仅使用Z轴,用Z数据的均值去掉偏置,并使用滑动窗口进行滤波。我创建了一个称为延迟(delay)的元件,用来把输入数据延后若干个样本,使其可以与平均结果的输出对齐。这样就可以从Z轴数据中减去基于周围值(而不仅仅是先前值)的滑动窗口平均值。击打检测则直接比较放大后的滤波数据与5个样本的平均值,并要求两次检测之间的最短时间差为250毫秒。
runF2() used only Z axis, and employed weak bias removal via a sliding window but added dynamic beta amplification of the filtered Z data based on the average amplitude above the bias that was removed when the last punch was detected. Also, a dynamic minimum time between punches of 225ms to 270ms was calculated based on delta time since last punch was detected. I called the amount of bias removed noise floor. I added a button to stop and resume the simulation so I could examine the debug output and the waveforms. This allowed me to see the beta amplification being used as the simulation went along.
runF3() used X and Z axis data. My theory was that there might be a jolt of movement from the punching action that could be additive to the Z axis data to help pinpoint the actual punch. It was basically the same algorithm as RunF2 but added in the X axis. It actually worked pretty well, and I thought I might be onto something here by correlating X movement and Z. I tried various tweaks and gyrations as you can see in the code lots of commented out experiments. I started playing around with what I call a compressor, which took the sum of five samples to see if it would detect bunches of energy around when punches occur. I didn’t use it in the algorithm but printed out how many times it crossed a threshold to see if it had any potential as a filtering element. In the end, this algorithm started to implode on itself, and it was time to take what I learned and start a new algorithm.
In runF4(), I increased the bias removal average to 50 samples. It started to work in attenuation and sample compression along with a fixed point LSB to preserve some decimal precision to the integer attenuate data. Since one of the requirements was this should be able to run on 8-bit microcontrollers, I wanted to avoid using floating point and time consuming math functions in the final C/C++ code. I’ll speak more to this in the components section, but, for now, know that I’m starting to work this in. I’ve convinced myself that finding bursts of acceleration is the way to go. At this point, I am removing the bias from both Z and X axis then squaring. I then attenuate each, adding the results together but scaling X axis value by 10. I added a second stage of averaging 11 filtered values to start smoothing the bursts of acceleration. Next, when the smoothed value gets above a fixed threshold of 100, the unsmoothed combination of Z and X squared starts getting loaded into the compressor until 100 samples have been added. If the compressor output of the 100 samples is greater than 5000, it is recorded as a hit. A variable time between punches gate is employed, but it is much smaller since the compressor is using 100 samples to encapsulate the punch detection. This lowers the gate time to between 125 and 275 milliseconds. While showing some promise, it was still too sensitive. While one data set would be spot on another would be off by 10 or more punches. After many tweaks and experiments, this algorithm began to implode on itself, and it was once again time to take what I’ve learned and start anew. I should mention that at this tim I’m starting to think there might not be a satisfactory solution to this problem. The resonant vibrations that seem to be out of phase with the contacts of the bag just seems to wreak havoc on the acceleration seen when the boxer gets into a good rhythm. Could this all just be a waste of time?
runF5()’s algorithm started out with the notion that a more formal high pass filter needed to be introduced rather than an average subtracted from the signal. The basic premise of the high pass filter was to use 99% of the value of new samples added to 1% of the value of average. An important concept added towards the end of runF5’s evolution was to try to simplify the algorithm by removing the first stage of processing into its own file to isolate it from later stages. Divide and Conquer; it’s been around forever, and it really holds true time and time again. I tried many experiments as you can see from the many commented out lines in the algorithm and in the FrontEndProcessorOld.java file. In the end, it was time to carry forward the new Front End Processor concept and start anew with divide and conquer and a need for a more formal high pass filter.
With time running out, it’s time to pull together all that has been learned up to now, get the Java code ready to port to C/C++ and implement real filters as opposed to using running averages. In runF6(), I had been pulling together the theory that I need to filter out the bias on the front end with a high pass filter and then try to use a low pass filter on the remaining signal to find bursts of acceleration that occur at a 2 to 4 Hertz frequency. No way was I going to learn how to calculate my own filter tap values to implement the high and low pass filters in the small amount of time left before the deadline. Luckily, I discovered the t-filter web site. Talk about a triple play. Not only was I able to put in my parameters and get filter tap values, I was also able to leverage the C code it generated with a few tweaks in my Java code. Plus, it converted the tap values to fixed point for me! Fully employing the divide and conquer concept, this final version of the algorithm introduced isolated sub algorithms for both Front End Processor and Detection Processing. This allowed me to isolate the two functions from each other except for the output signal of one becoming the input to the other, which enabled me to focus easily on the task at hand rather than sift through a large group of variables where some might be shared between the two stages.
通过功能模块的划分,现在就可以非常清晰地在把数据送给检测计数模块之前,先完成抵消偏置量的工作。这样,检测部分的代码就可以专注于过滤并实现一个功能:筛选出每秒发生2到4次的击打事件。
需要注意的一点是,这个最终算法比先前的一些算法更简洁。即使是软件,在过程中的某个时候,你也应该做一次称为Muntzing的处理。Muntzing是一种技术:回头看看有什么东西可以删掉而不影响功能。简洁优雅的代码的标准是:每一行代码都必不可少,删掉任何一行都会破坏功能。你可以Google一下Earl “Madman” Muntz,来更好地理解和感受Muntzing的精神。
Final output of DET
Above is the visual output from runF6. The Green line is 45 samples delayed of the output of the low pass filter, and the yellow line is an average of 99 values of the output of the low pass filter. The Detection Processor includes a detection algorithm that detects punches by tracking min and max crossings of the Green signal using the Yellow signal as a template for dynamic thresholding. Each minimum is a Red spike, and each maximum is a Blue spike, which is also a punch detection. The timescale is in milliseconds. Notice there are about three blue spikes per second inside the 2 to 4Hz range predicted. And the rest is history!
这里简要介绍在各种算法中使用到的各个组成部分。
This is used to buffer a signal so you can time align it to some other operation. For example, if you average nine samples and you want to subtract the average from the original signal, you can use a delay of five samples of the original signal so you can use values that are itself plus the four samples before and four samples after.
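To make the delay component concrete, here is a small ring-buffer sketch in C++; the class name and the example window sizes are mine, not taken from the original Java code.
// A simple N-sample delay line: push the newest value in, get the value from N samples ago out.
#include <vector>
class DelayLine {
public:
  explicit DelayLine(int delaySamples) : buf(delaySamples, 0.0f), idx(0) {}
  // Returns the sample pushed 'delaySamples' calls ago (zeros until the buffer first fills).
  float push(float in) {
    float out = buf[idx];
    buf[idx] = in;
    idx = (idx + 1) % buf.size();
    return out;
  }
private:
  std::vector<float> buf;
  size_t idx;
};
// Example: delay the raw signal by five samples so it lines up with the centre of a
// nine-sample average, as in the description above.
// DelayLine rawDelay(5);  float aligned = rawDelay.push(rawSample);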
Attenuation is a simple but useful operation that can scale a signal down before it is amplified in some fashion with filtering or some other operation that adds gain to the signal. Typically attenuation is measured in decibels (dB). You can attenuate power or amplitude depending on your application. If you cut the amplitude by half, you are reducing it by -6 dB. If you want to attenuate by other dB values, you can check the dB scale here. As it relates to the Speedbag algorithm, I’m basically trying to create clear gaps in the signal, for instance squelching or squishing smaller values closer to zero so that squaring values later can really push the peaks higher but not having as much effect on the values pushed down towards zero. I used this technique to help accentuate the bursts of acceleration versus background vibrations of the speed bag platform.
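For reference, the dB figures above translate to amplitude factors like this (a tiny helper, not part of the original code):
// Convert an attenuation in dB to an amplitude factor: -6 dB -> ~0.50, -12 dB -> ~0.25.
#include <cmath>
double dbToAmplitude(double db) { return std::pow(10.0, db / 20.0); }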
Sliding Window Average is a technique of calculating a continuous average of the incoming signal over a given window of samples. The number of samples to be averaged is known as the window size. The way I like to implement a sliding window is to keep a running total of the samples and a ring buffer to keep track of the values. Once the ring buffer is full, the oldest value is removed and replaced with the next incoming value, and the value removed from the ring buffer is subtracted from the new value. That result is added to the running tally. Then simply divide the running total by the window size to get the current average whenever needed.
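A minimal C++ version of the sliding window average described here, using the same running-total-plus-ring-buffer idea (names are mine):
// Sliding window average: running total plus a ring buffer of the last N samples.
// (Zeros count toward the average until the window first fills.)
#include <vector>
class SlidingAverage {
public:
  explicit SlidingAverage(int windowSize) : ring(windowSize, 0.0f), idx(0), total(0.0f) {}
  float push(float in) {
    total -= ring[idx];            //remove the oldest value from the running total
    ring[idx] = in;                //overwrite it with the newest value
    total += in;
    idx = (idx + 1) % ring.size();
    return total / ring.size();    //current average over the window
  }
private:
  std::vector<float> ring;
  size_t idx;
  float total;
};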
This is a very simple concept which is to change the sign of the values to all positive or all negative so they are additive. In this case, I used rectification to change all values to positive. As with rectification, you can use a full wave or half wave method. You can easily do full wave by using the abs()
math function that returns the value as positive. You can square values to turn them positive, but you are changing the amplitude. A simple rectify can turn them positive without any other effects. To perform half wave rectification, you can just set any value less than zero to zero.
In the DSP world Compression is typically defined as compressing the amplitudes to keep them in a close range. My compression technique here is to sum up the values in a window of samples. This is a form of down-sampling as you only get one sample out each time the window is filled, but no values are being thrown away. It’s a pure total of the window, or optionally an average of the window. This was employed in a few of the algorithms to try to identify bursts of acceleration from quieter times. I didn’t actually use it in the final algorithm.
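Although it did not make it into the final algorithm, the window "compressor" described here is simple enough to sketch (a hypothetical helper, not the original code):
// Window compressor: sum N input samples into one output sample (a form of down-sampling).
struct Compressor {
  long sum = 0;
  int  count = 0;
  int  window;
  explicit Compressor(int w) : window(w) {}
  // Returns true once per full window and places the window total in 'out'.
  bool push(long in, long &out) {
    sum += in;
    if (++count < window) return false;
    out = sum;
    sum = 0;
    count = 0;
    return true;
  }
};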
Finite Impulse Response (FIR) is a digital filter that is implemented via a number of taps, each with its assigned polynomial coefficient. The number of taps is known as the filter’s order. One strength of the FIR is that it does not use any feedback, so any rounding errors are not cumulative and will not grow larger over time. A finite impulse response simply means that if you input a stream of samples that consisted of a one followed by all zeros, the output of the filter would go to zero within at most the order +1 amount of 0 value samples being fed in. So, the response to that single sample of one lives for a finite amount of samples and is gone. This is essentially achieved by the fact there isn’t any feedback employed. I’ve seen DSP articles claim calculating filter tap size and coefficients is simple, but not to me. I ended up finding an online app called tFilter that saved me a lot of time and aggravation. You pick the type of filter (low, high, bandpass, bandstop, etc) and then setup your frequency ranges and sampling frequency of your input data. You can even pick your coefficients to be produced in fixed point to avoid using floating point math. If you’re not sure how to use fixed point or never heard of it, I’ll talk about that in the Embedded Optimization Techniques section.
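As an illustration of what such a filter looks like in code, here is a stripped-down fixed-point FIR in C++ in the same put/get style that generated filter code typically uses; the three tap values are placeholders, not the real coefficients tFilter produced for this project.
// Generic fixed-point FIR: a history ring buffer plus a multiply-accumulate over the taps.
#include <cstdint>
#define FIR_TAP_NUM 3
static const int16_t firTaps[FIR_TAP_NUM] = { 1000, 2000, 1000 }; //placeholder coefficients
struct Fir {
  int16_t history[FIR_TAP_NUM] = {0};
  int lastIndex = 0;
  void put(int16_t input) {
    history[lastIndex] = input;
    lastIndex = (lastIndex + 1) % FIR_TAP_NUM;
  }
  int16_t get() const {
    int32_t acc = 0;
    int index = lastIndex;
    for (int i = 0; i < FIR_TAP_NUM; ++i) {
      index = (index != 0) ? index - 1 : FIR_TAP_NUM - 1; //walk backwards through the history
      acc += (int32_t)history[index] * firTaps[i];        //32-bit accumulator, no feedback
    }
    return (int16_t)(acc >> 16); //drop the 16 fractional bits of the fixed-point result
  }
};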
Mag Square is a technique that can save computing power of calculating square roots. For example, if you want to calculate the vector for X and Z axis, normally you would do the following: val = sqr((X * X) + (Y * Y)). However, you can simply leave the value in (X * X) + (Y * Y), unless you really need the exact vector value, the Mag Square gives you a usable ratio compared to other vectors calculated on subsequent samples. The numbers will be much larger, and you may want to use attenuation to make them smaller to avoid overflow from additional computation downstream.
I used this technique in the final algorithm to help accentuate the bursts of acceleration from the background vibrations. I only used Z * Z in my calculation, but I then attenuated all the values by half or -6dB to bring them back down to reasonable levels for further processing. For example, after removing the bias if I had some samples around 2 and then some around 10, when I squared those values I now have 4 and 100, a 25 to 1 ratio. Now, if I attenuate by .5, I have 2 and 50, still a 25 to 1 ratio but now with smaller numbers to work with.
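In code, this mag-square-then-attenuate step amounts to something like the following sketch (with the -6 dB attenuation written as a right shift):
// Square the bias-removed Z sample to accentuate bursts, then attenuate by half (-6 dB).
#include <cstdint>
int32_t magSquareAttenuated(int32_t zNoBias)
{
  int32_t squared = zNoBias * zNoBias; //2 -> 4, 10 -> 100: peaks grow much faster than the floor
  return squared >> 1;                 //attenuate by 0.5, preserving the 25:1 ratio from the example
}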
Using fixed point numbers is another way to stretch performance, especially on microcontrollers. Fixed point is basically integer math, but it can keep precision via an implied fixed decimal point at a particular bit position in all integers. In the case of my FIR filter, I instructed tFilter to generate polynomial values in 16-bit fixed point values. My motivation for this was to ensure I don’t use more than 32-bit integers, which would especially hurt performance on an 8-bit microcontroller.
Rather than go into the FIR filter code to explain how fixed point works, let me first use a simple example. While the FIR filter algorithm does complex filtering with many polynomials, we could implement a simple filter that outputs the same input signal but -6dB down or half its amplitude. In floating point terms, this would be a simple one tap filter to multiply each incoming sample by 0.5. To do this in fixed point with 16 bit precision, we would need to convert 0.5 into its 16-bit fixed point representation. A value of 1.0 is represented by 1 * (216) or 65,536. Anything less than 65536 is a value less than 1. To create a fixed point integer of 0.5, we simply use the same formula 0.5 * (216), which equals 32,768. Now we can use that value to lower the amplitude by .5 of every sample input. For example, say we input into our simple filter a sample with the value of 10. The filter would calculate 10 * 32768 = 327,680, which is the fixed point representation. If we no longer care about preserving the precision after the calculations are performed, it can easily be turned back into a non-fixed point integer by simply right shifting by the number of bits of precision being used. Thus, 327680 >> 16 = 5. As you can see, our filter changed 10 into 5 which of course is the one half or -6dB we wanted out. I know 0.5 was pretty simple, but if you had wanted 1/8 the amplitude, the same process would be used, 65536 * .125 = 8192. If we input a sample of 16, then 16 * 8192 = 131072, now change it back to an integer 131072 >> 16 = 2. Just to demonstrate how you lose the precision when turning back to integer (the same as going float to integer) if we input 10 into the 1/8th filter it would yield the following, 10 * 8192 = 81920 and then turning it back to integer would be 81920 >> 16 = 1, notice it was 1.25 in fixed point representation.
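The 0.5 and 1/8 examples above, written out as a tiny program:
// 16-bit fractional fixed point: 1.0 is represented by 1 << 16 = 65536.
#include <cstdint>
#include <cstdio>
int main() {
  const int32_t HALF   = (int32_t)(0.5   * 65536); //32768
  const int32_t EIGHTH = (int32_t)(0.125 * 65536); //8192
  printf("%ld\n", (long)((10 * HALF)   >> 16)); //10 * 32768 = 327680, >> 16 -> 5
  printf("%ld\n", (long)((16 * EIGHTH) >> 16)); //16 * 8192  = 131072, >> 16 -> 2
  printf("%ld\n", (long)((10 * EIGHTH) >> 16)); //10 * 8192  = 81920,  >> 16 -> 1 (was 1.25 in fixed point)
  return 0;
}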
Getting back to the FIR filters, I picked 16 bits of precision, so I could have a fair amount of precision but balanced with a reasonable amount of whole numbers. Normally, a signed 32-bit integer can have a range of - 2,147,483,648 to +2,147,483,647, however there now are only 16 bits of whole numbers allowed which is a range of -32,768 to +32,767. Since you are now limited in the range of numbers you can use, you need to be cognizant of the values being fed in. If you look at the FEPFilter_get function, you will see there is an accumulator variable accZ which sums the values from each of the taps. Usually if your tap history values are 32 bit, you make your accumulator 64-bit to be sure you can hold the sum of all tap values. However, you can use a 32 bit value if you ensure that your input values are all less than some maximum. One way to calculate your maximum input value is to sum up the absolute values of the coefficients and divide by the maximum integer portion of the fixed point scheme. In the case of the FEP FIR filter, the sum of coefficients was 131646, so if the numbers can be 15 bits of positive whole numbers + 16 bits of fractional numbers, I can use the formula (231)/131646 which gives the FEP maximum input value of + or - 16,312. In this case, another optimization can be realized which is not to have a microcontroller do 64-bit calculations.
Before walking through the processing chain, we should discuss delays caused by filtering. Many types of filtering add delays to the signal being processed. If you do a lot of filtering work, you are probably well aware of this fact, but, if you are not all that experienced with filtering signals, it’s something of which you should be aware. What do I mean by delay? This simply means that if I put in a value X and I get out a value Y, how long it takes for the most impact of X to show up in Y is the delay. In the case of a FIR filter, it can be easily seen by the filter’s Impulse response plot, which, if you remember from my description of FIR filters, is a stream of 0’s with a single 1 inserted. T-Filter shows the impulse response, so you can see how X impacts Y’s output. Below is an image of the FEP’s high pass filter Impulse Response taken from the T-Filter website. Notice in the image that the maximum impact on X is exactly in the middle, and there is a point for each tap in the filter.
Below is a diagram of a few of the FEP’s high pass filter signals. The red signal is the input from the accelerometer or the newest sample going into the filter, the blue signal is the oldest sample in the filter’s ring buffer. There are 19 taps in the FIR filter so they represent a plot of the first and last samples in the filter window. The green signal is the value coming out of the high pass filter. So to relate to my X and Y analogy above, the red signal is X and the green signal is Y. The blue signal is delayed by 36 milliseconds in relation to the red input signal which is exactly 18 samples at 2 milliseconds, this is the window of data that the filter works on and is the Finite amount of time X affects Y.
Notice the output of the high pass filter (green signal) seems to track changes from the input at a delay of 18 milliseconds, which is 9 samples at 2 milliseconds each. So, the most impact from the input signal is seen in the middle of the filter window, which also coincides with the Impulse Response plot where the strongest effects of the 1 value input are seen at the center of the filter window.
It’s not only a FIR that adds delay. Usually, any filtering that is done on a window of samples will cause a delay, and, typically, it will be half the window length. Depending on your application, this delay may or may not have to be accounted for in your design. However, if you want to line this signal up with another unfiltered or less filtered signal, you are going to have to account for it and align it with the use of a delay component.
I’ve talked at length about how to get to a final solution and all the components that made up the solution, so now let’s walk through the processing chain and see how the signal is transformed into one that reveals the punches. The FEP’s main goal is to remove bias and create an output signal that smears across the bursts of acceleration to create a wave that is higher in amplitude during increased acceleration and lower amplitude during times of less acceleration. There are four serial components to the FEP: a High Pass FIR, Attenuator, Rectifier and Smoothing via Sliding Window Average.
The first image is the input and output of the High Pass FIR. Since they are offset by the amount of bias, they don’t overlay very much. The red signal is the input from the accelerometer, and the blue is the output from the FIR. Notice the 1g of acceleration due to gravity is removed and slower changes in the signal are filtered out. If you look between 24,750 and 25,000 milliseconds, you can see the blue signal is more like a straight line with spikes and a slight ringing on it, while the original input has those spikes but meandering on some slow ripple.
Next is the output of the attenuator. While this component works on the entire signal, it lowers the peak values of the signal, but its most important job is to squish the quieter parts of the signal closer to zero values. The image below shows the output of the attenuator, and the input was the output of the High Pass FIR. As expected, peaks are much lower but so is the quieter time. This makes it a little easier to see the acceleration bursts.
Next is the rectifier component. Its job is to turn all the acceleration energy in the positive direction so that it can be used in averaging. For example, an acceleration causing a positive spike of 1000 followed by a negative spike of 990 would yield an average of 5, while a 1000 followed by a positive of 990 would yield an average of 995, a huge difference. Below is an image of the Rectifier output. The bursts of acceleration are slightly more visually apparent, but not easily discernable. In fact, this image shows exactly why this problem is such a tough one to solve; you can clearly see how resonant shaking of the base causes the pattern to change during punch energy being added. The left side is lower and more frequent peaks, the right side has higher but less frequent peaks.
The 49 value sliding window is the final step in the FEP. While we have done subtle changes to the signal that haven’t exactly made the punches jump out in the images, this final stage makes it visually apparent that the signal is well on its way of yielding the hidden punch information. The fruits of the previous signal processing magically show up at this stage. Below is an image of the Sliding Window average. The blue signal is its input or the output of the Rectifier, and the red signal is the output of the sliding window. The red signal is also the final output of the FEP stage of processing. Since it is a window, it has a delay associated with it. Its approximately 22 samples or 44 milliseconds on average. It doesn’t always look that way because sometimes the input signal spikes are suddenly tall with smaller ringing afterwards. Other times there are some small spikes leading up to the tall spikes and that makes the sliding window average output appear inconsistent in its delay based on where the peak of the output shows up. Although these bumps are small, they are now representing where new acceleration energy is being introduced due to punches.
Now it’s time to move on to the Detection Processor (DET). The FEP outputs a signal that is starting to show where the bursts of acceleration are occurring. The DET’s job will be to enhance this signal and employ an algorithm to detect where the punches are occurring.
The first stage of the DET is an attenuator. Eventually, I want to add exponential gain to the signal to really pull up the peaks, but, before doing that, it is important to once again squish down the lower values towards zero and lower the peaks to keep from generating values too large to process in the rest of the DET chain. Below is an image of the output from the attenuator stage, it looks just like the signal output from the FEP, however notice the signal level peaks were above 100 from the FEP, and now peaks are barely over 50. The vertical scale is zoomed in with the max amplitude set to 500 so you can see that there is a viable signal with punch information.
With the signal sufficiently attenuated, it’s time to create the magic. The Magnitude Square function is where it all comes together. The attenuated signal carries the tiny seeds from which I’ll grow towering Redwoods. Below is an image of the Mag Square output, the red signal is the attenuated input, and the blue signal is the mag square output. I’ve had to zoom out to a 3,000 max vertical, and, as you can see, the input signal almost looks flat, yet the mag square was able to pull out unmistakable peaks that will aid the detection algorithm to pick out punches. You might ask why not just use these giant peaks to detect punches. One of the reasons I’ve picked this area of the signal to analyze is to show you how the amount of acceleration can vary greatly as you can see the peak between 25,000 and 25,250 is much smaller than the surrounding peaks, which makes pure thresholding a tough chore.
Next, I decided to put a Low Pass filter to try to remove any fast changing parts of the signal since I’m looking for events that occur in the 2 to 4 Hz range. It was tough on T-Filter to create a tight low pass filter with a 0 to 5 Hz band pass as it was generating filters with over 100 taps, and I didn’t want to take that processing hit, not to mention I would then need a 64-bit accumulator to hold the sum. I relaxed the band pass with a 0 to 19 Hz range and the band stop at 100 to 250 Hz. Below is an image of the low pass filter output. The blue signal is the input, and the red signal is the delayed output. I used this image because it allows the input and output signal to be seen without interfering with each other. The delay is due to 6 sample delay of the low pass FIR, but I have also introduced a 49 sample delay to this signal so that it is aligned in the center of the 99 sample sliding window average that follows in the processing chain. So it is delayed by a total of 55 samples or 110 milliseconds. In this image, you can see the slight amplification of the slow peaks by their height and how it is smoothed as the faster changing elements are attenuated. Not a lot going on here but the signal is a little cleaner, Earl Muntz might suggest I cut the low pass filter out of the circuit, and it might very well work without it.
The final stage of the signal processing is a 99 sample sliding window average. I built into the sliding window average the ability to return the sample in the middle of the window each time a new value is added and that is how I produced the 49 sample delayed signal in the previous image. This is important because the detection algorithm is going to have 2 parallel signals passed into it, the output of the 99 sliding window average and the 49 sample delayed input into the sliding window average. This will perfectly align the un-averaged signal in the middle of the sliding window average. The averaged signal is used as a dynamic threshold for the detection algorithm to use in its detection processing. Here, once again, is the image of the final output from the DET.
In the image, the green and yellow signals are inputs to the detection algorithm, and the blue and red are outputs. As you can see, the green signal, which is a 49 samples delayed, is aligned perfectly with the yellow 99 sliding window average peaks. The detection algorithm monitors the crossing of the yellow by the green signal. This is accomplished by both maximum and minimum start guard state that verifies the signal has moved enough in the minimum or maximum direction in relation to the yellow signal and then switches to a state that monitors the green signal for enough change in direction to declare a maximum or minimum. When the peak start occurs and it’s been at least 260ms since the last detected peak, the state switches to monitor for a new peak in the green signal and also makes the blue spike seen in the image. This is when a punch count is registered. Once a new peak has been detected, the state changes to look for the start of a new minimum. Now, if the green signal falls below the yellow by a delta of 50, the state changes to look for a new minimum of the green signal. Once the green signal minimum is declared, the state changes to start looking for the start of a new peak of the green signal, and a red spike is shown on the image when this occurs.
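To make the state machine easier to follow, here is a compressed C++ sketch of the min/max-crossing logic as I read the description above; the state names and structure are my own paraphrase, only the 260 ms gate and the delta of 50 come from the text, and this is not the actual detection code.
// Sketch of the detection state machine: 'green' is the 49-sample-delayed low-pass signal,
// 'yellow' is the 99-sample sliding average used as a dynamic threshold.
#include <cstdint>
enum class DetState { SeekPeakStart, SeekPeak, SeekMinStart, SeekMin };
struct Detector {
  DetState state = DetState::SeekPeakStart;
  int32_t  candidate = 0;
  long     lastPunchMs = -1000000;
  int      punches = 0;
  static const int32_t DELTA = 50; //how far green must move relative to yellow (from the text)
  static const long GAP_MS = 260;  //minimum time between detected peaks (from the text)
  void sample(int32_t green, int32_t yellow, long tMs) {
    switch (state) {
      case DetState::SeekPeakStart:  //wait for green to cross above the yellow threshold
        if (green > yellow && tMs - lastPunchMs >= GAP_MS) {
          punches++;                 //start of peak: register a punch (the blue spike)
          lastPunchMs = tMs;
          candidate = green;
          state = DetState::SeekPeak;
        }
        break;
      case DetState::SeekPeak:       //ride the signal up until it turns back down
        if (green > candidate) candidate = green;
        else if (candidate - green > DELTA) state = DetState::SeekMinStart;
        break;
      case DetState::SeekMinStart:   //wait for green to fall below yellow by the delta
        if (green < yellow - DELTA) { candidate = green; state = DetState::SeekMin; }
        break;
      case DetState::SeekMin:        //ride the signal down until it turns back up (the red spike)
        if (green < candidate) candidate = green;
        else if (green - candidate > DELTA) state = DetState::SeekPeakStart;
        break;
    }
  }
};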
Again, I’ve picked this time in the recorded data because it shows how the algorithm can track the punches even during big swings in peak amplitude. What’s interesting here is if you look between the 24,750 and 25,000 time frame, you can see the red spike detected a minimum due to the little spike upward of the green signal, which means the state machine started to look for the next start of peak at that point. However, the green signal never crossed the yellow line, so the start of peak state rode the signal all the way down to the floor and waited until the cross of the yellow line just before the 25,250 mark to declare the next start of peak. Additionally, the peak at the 25,250 mark is much lower than the surrounding peaks, but it was still easily detected. Thus, the dynamic thresholding and the state machine logic allows the speed bag punch detector algorithm to “Roll with the Punches”, so to speak.
总而言之,我们在本文中讨论了这次技术众包背后的很多细节。首先,充分分析问题并确定需求十分重要,因为这关系到项目所需投入的方向,以及最终顺利完成它所需要的各领域知识。第二,对于这种性质的问题,创建一个验证环境来构建算法是势在必行的,在本例中,它就是那个基于Java、带数据可视化的原型。第三,是在最终的目标机器和环境中实现:在一台PC上,你有优秀的优化编译器和强大的CPU及缓存;而对于微控制器,针对目标机器的优化就是留给你自己的工作了,你需要用上所知道的每一个优化技巧,以保证微控制器能尽可能快地处理数据。第四,迭代开发可以帮助你解决这类问题:在尝试中摸着石头过河,边学、边做、边验证。
当我回顾这个众包项目、探究成功的原因时,我想主要有两点。其一,为这项工作创造正确的工具是无价的:能够直观地看到算法作用后的信号结果,不仅是把输出信号画出来,而且是实时地画出来,使我完全理解了生成的加速度数据背后的因素,就好像Nate在练习拳击,而我正看着屏幕上的波形。其二,根本的因素是,我意识到这是一个每秒发生2到4次的事件,于是坚定信念并不懈地追求如何把原始的输入信号转换成能显示这些事件的东西。我并没有指望靠Google直接找到答案。请谨记:知识不是来自书中,它只是被记录在书中;总要先有人抛下书本去发现一些新东西,然后它才成为知识。使用你已掌握或可以获取的知识,但不要害怕用你的想象力去尝试之前没人解决过的问题。所以记住,当你走到铺好的道路尽头时,你是转过头去找一条别人已经铺好的路,还是继续前进,开辟一条属于自己的新路?我当时没法直接谷歌到“如何用加速度计统计拳击速度沙袋被击打的次数”,但现在你们可以了。
原始文章采用CC BY-SA 4.0协议发布,您可以自由地共享与演绎。
本文由美国开源硬件厂商SparkFun(火花快乐)的相关教程翻译而来,原始教程采用同样的CC BY-SA 4.0协议。为便于理解和方便读者学习使用,部分内容为适应国内使用场景稍有删改或整合,这些行为都是协议允许并鼓励的。
原始文章及相关素材链接:
https://www.sparkfun.com/news/2115?_ga=1.105187979.946766378.1445226389#requirements https://learn.sparkfun.com/tutorials/lessons-in-algorithms
这里面的画面都是真实的!用心设计的科学道具让已经很刺激的电子音乐不只被听见,还可以更直接地“被看见”,视觉与听觉上都非常奥妙美丽,令人惊艳。来自新西兰的音乐家Nigel Stanford这次与导演Shahir Daud合作,为他的单曲Cymatics拍摄了一部画面十分特殊的MV,其中运用了许多真实的物理现象,让画面呈现一种未来而奇幻的感官刺激。
这是一种使声音看得见的技术,藉由震动沙子或液体让声波可视化。从达芬奇开始,就不断有跨界的科学家与艺术家在探索这种来自大自然的“共振”之美。现在这项技术更是可以多元应用:拿来分析复杂的声波、解读海豚的语言、追溯并重新创造浑然天成的艺术图腾,整个过程就像透过一颗魔法水晶球去观察未知世界一样迷人。
分解动作,学霸们请接招:
第一个实验是克拉尼金属板实验:把扬声器与金属板连接,观察细沙在金属板上形成的图形。音频每毫秒的变化都非常迅速,因此在反复调整和筛选之后,最终决定选择四个频率(657Hz、1565Hz、932Hz、3592Hz),用简单的合成技术进行音效的混合。
第二个实验是软管实验。他们一开始在Stanford的浴室里做了拍摄水流的试验,最后发现25赫兹的音频配合同等数值的帧率,拍摄到的旋转水流正好有理想的半径和几乎静止的效果。软管与架子鼓、扬声器和低音炮进行技术性的连接,在水流经螺旋状软管、看似“冻结”的瞬间,摄像机以25Hz的帧速率进行记录。为了防止水漫溢,制作团队要不断在现场排水。
第三个实验是扬声器水盘实验。Stanford一开始以为这个实验与金属板实验类似,后来才发现要拍到水纹的状态,最重要的是音频与拍摄帧率的配合。将注入液体的大盘子放在扬声器(音频在50Hz和100Hz)的顶部。他们尝试了各种不同的液体,经过多次实验后选择了冰冻伏特加作为注入液体,因为这种液体带来的厚重感恰到好处。
第四个实验是铁磁流体实验。铁磁流体很酷的地方在于它能形成带尖利棱角的球体,但是这个过程比较花时间,跟音乐的曲调、节奏都很难合起来,所以他们最后把音乐节奏跟电磁铁的开关同步起来,让我们看到不同音调在铁磁流体中制造的回响。选择大小合适的磁体,才能以足够的速度形成所需的刺状形态;在磁场关闭后,液态磁体在盘中回落带动的涟漪诠释着声音的形状。
第五个实验是静电球,这群人一开始想定做一个直径4英尺的超大静电球,但没有厂家能保证电流能靠节奏的起停来控制,所以他们最后买了一个20美金的玩具静电球来操纵。
第六个实验是鲁本管(Rubens' tube)实验,导演Shahir想到了用焰管的主意。当音频与焰管的共振频率相当时,管子里的液化气会形成压力高低不同的区域,从而影响火焰的高低。最后的装置让Shahir想到教堂里的管风琴,所以Stanford决定在这一段采用管风琴声。在金属管内充满可燃气体,与扬声器连接,不同的音频会形成不同的压力波,进而通过压力波的作用影响火焰的高低形状。
第七个实验是特斯拉线圈实验。16000伏的特斯拉线圈制造出了小火花的背景音,但是却没能达到Stanford预期的那种每一次打鼓都有闪电的效果。那个在特斯拉装置周围蹦跳的家伙,全身穿着超过32公斤重的法拉第导电服,这使他可以制造出惊人的闪电又不会被电死。最后一幕他脚下的那道电,让整个团队都觉得圆满了。以特斯拉线圈产生的高压电弧在空气中的形状诠释声音的形状,片中的演奏者被特制的全金属外套包裹(原理类似于带电作业用的等电位服),确保人体不会被电伤;同时在鼓棒上缠绕导线来引导电流,形成超现实的美感。
将有趣的科学实验设计成一场秀,让「科学」和「音乐」两个看似南辕北辙的议题,透过「影像艺术」这座桥梁搭上线,变得更有价值、更值得被记忆。高速摄影的慢速美感每一刻都叫人屏息,让看不见的大自然奥秘透过各种方法展现它原生的美丽。
在先前分享了复变函数的课程,有暨大(暨南大学)的同学开始向我提出新的需求,有没有好的线性代数课程的开放课和大家分享。我于是问他:是什么专业的、为什么会提这样的要求?他们说是学电子科学与技术专业的,当时的线性代数只上到一半,大三后无论是机器人运动模型还是电子技术专业课程都需要用到很多还没讲的矩阵的知识。
我立刻就想到了MIT课程中著名的Gilbert Strang教授的线性代数课程。与国内广泛采用的同济版教材不同,这门课并不像国内那样纯粹注重推导和演算的训练,而是能让你看到数学的美。我相信采用同济教材的很大一个原因是好教、好上课,尤其是在学时被严重压缩的今天;更重要的原因是它是考研的指定教材。以下列出了相关材料,网易公开课已经全部翻译了双语字幕:
麻省理工公开课:线性代数
http://open.163.com/special/opencourse/daishu.html
麻省理工学院公开课:MIT线性代数习题课
http://open.163.com/special/opencourse/mitxianxingdaishuxitike.html
使用教材:
精装(美版):
平装(国际版,仅限美国以外地区使用):
很遗憾,平装的国际版似乎已不再出版,只有精装版可供选购
https://www.amazon.cn/dp/0030105676/ref=sr_1_6?ie=UTF8&qid=1518417929&sr=8-6&keywords=Gilbert+Strang
如果你觉得这些都太复杂,并能接受中国普通高校的上课模式,还有一个不错的选择,就是施光燕的线性代数,相关书籍和优酷视频播单如下:
优酷播单:
http://list.youku.com/albumlist/show/id_2260348.html?sf=10700&spm=a2h0k.8191403.0.0
使用教材:
https://www.amazon.cn/线性代数/dp/B0080B7914/ref=sr_1_1?ie=UTF8&qid=1478526705&sr=8-1&keywords=线性代数+施光燕
接下来介绍今天主推的重点,还是老生常谈,引用能引导学生独立思考的林秀豪教授的应用数学课-线性代数部分。
台湾国立清华大学物理系特聘教授
学历:美国加州大学圣塔芭芭拉分校物理博士
经历:
国家(台湾)理论科学研究中心科学家(2006~2008)
美国理论物理中心(圣塔芭芭拉)访问研究员(2004)
*台湾国立清华大学95、99、102杰出教学奖
*教学网站:http://hsiuhau.wikispaces.com/
*授课领域:统计力学、普通物理、 热统计物理、
热物理、应用数学、多体物理
这门课是给物理系学生上的,叫应用数学,所采用的教科书如下所示,这是本非常容易读懂的数学书籍,非常适合本科在高等数学(或微积分)后需要继续提升的理工类学生,书中内容几乎涵盖了微积分外所有常用理工应用场景的数学知识。
购买网址:
我并不支持盗版行为,但显而易见的是,由于国内外收入差距巨大,这种价格的教科书显然超出了普通中国学生的承受范围。因此,为了满足可能的强烈需求,只能把PDF也附在这里。本文视频只讲到书中的第三章。
网盘分享链接:http://cloud.189.cn/t/FbmIZrYZjqau
我专门节选了其中讲述线性代数的部分,并把视频全部上传到优酷,一共18讲。看了你就知道,林秀豪教授的课程确有独到之处……废话到此为止,接下来的都是彩蛋和福利:
官方给出的讲义:
教学视频:
最近有同学和我反映说,学校的复变函数课很渣。学不好不是他的主观态度所致,而是任课老师课堂质量的客观现实。问我有什么解决办法,我和大家说,现在网上的开放式课程OCW和慕课Moocs如此发达,学习一门课程就像是打工一样,东家不打打西家。哪些学校哪个老师这门课讲得最好,网上一搜就知道啦。
不过很快,他们又向我反映,复变函数这门课似乎并没有什么特别好的开放式课程。我听完后顿感吐血:如此广泛被理工类专业使用的数学课程,在今天这个时代怎么可能会缺乏好的学习素材呢?除了需要一些特殊的技巧之外,我发现信息检索能力不足的问题在当今大学生身上也比较普遍,他们普遍不知道MIT等高校主推的edX计划。由于MIT OpenCourseWare的传统,很多视频都托管在YouTube等天朝无法访问的网站上;众所周知,不能“科学地访问互联网”也是困扰很多人的一个问题。
因为我本科在校时学的是机电专业,当年并没有学过这门课程,我们学校的安排是,直接跳过这门课程学了积分变换。现在工科类专业普遍采用的是西安交大的这本教材。
为了满足我重新自学这门课程的愿望,我决心帮他们找一找。后来我真就发现中国大陆这边有一个还可以的视频教程,就是多年前中央广播电视大学的课程、由东北师范大学肖荫庵教授讲授的《复变函数》,现在优酷等网站上还有不少播单,比较容易找到,相关的配套教材和习题解答也比较容易找到。现公布如下:
优酷视频播单:
http://list.youku.com/albumlist/show/id_1033410.html?sf=10201&spm=a2h0k.8191403.0.0
所采用教材
https://www.amazon.cn/gp/product/B002G9T7UC/ref=ox_sc_act_title_4?ie=UTF8&psc=1&smid=A1AJ19PSB66TGU
不过这次,我决定不敷衍,要来点特别的干货。中国高校的数学课模式应该是从前苏联学来的,听起来总有一种让人昏昏欲睡的节奏:不仅看不到使用场景,有些理论推导也不甚明晰。而纯英语教学的数学课程又让不少学生望而生畏。对于偏理论的课程,创元素内部向来有阅读外文书并定期举行读书会讨论、交流观点的传统。因此,我思索了很久,把我大学时期曾经听过他讲大学物理(也叫普通物理学)的老师的开放式课程找了出来,就是下面这位:
台湾国立清华大学物理系特聘教授
学历:美国加州大学圣塔芭芭拉分校物理博士
经历:
国家(台湾)理论科学研究中心科学家(2006~2008)
美国理论物理中心(圣塔芭芭拉)访问研究员(2004)
*台湾国立清华大学95、99、102杰出教学奖
*教学网站:http://hsiuhau.wikispaces.com/
*授课领域:统计力学、普通物理、 热统计物理、
热物理、应用数学、多体物理
这门课是给物理系学生上的,叫应用数学,所采用的教科书如下所示,这是本非常容易读懂的数学书籍,非常适合本科在高等数学(或微积分)后需要继续提升的理工类学生,书中内容几乎涵盖了微积分外所有常用理工应用场景的数学知识。
购买网址:
我并不支持盗版行为,但显而易见的是,由于国内外收入差距巨大,这种价格的教科书显然超出了普通中国学生的承受范围。因此,为了满足可能的强烈需求,只能把PDF也附在这里。本文视频只讲到书中的第二章和第十四章。
网盘分享链接:http://cloud.189.cn/t/FbmIZrYZjqau
我专门节选了其中讲述复变函数的部分,并全部把视频上传到优酷。一共17讲,看了你就知道,林秀豪教授的课程确有独到之处...废话到此为止,接下来的都是彩蛋和福利:
官方只给出了第一部分的三个讲义:
Complex numbers – the basics (Sec 1-5)
Complex series (Sec 6-10)
Complex functions (Sec 11-16)
CH14.1 Introduction
CH14.2 Analytic Functions
CH14.2 Analytic Functions
CH14.3 Contour Integrals
CH14.3 Contour Integrals
CH14.4 Laurent Series
CH14.4 Laurent Series
CH14.5 The Residue Theorem
CH14.6 Methods of Finding Residues
CH14.7 Evaluation of Definite Integrals By Use of the Residues Theorem
CH14.7 Evaluation of Definite Integrals By Use of the Residues Theorem
CH14.7 Evaluation of Definite Integrals By Use of the Residues Theorem
CH14.7 Evaluation of Definite Integrals By Use of the Residues Theorem