How to calibrate an iPhone?
Hi,
I've just bought the SoundMeter app for my iPhone 3G. When I switch it on, it reads about 52.0 or so (dB? the main number...), even in a silent room where there obviously aren't that many dB. As soon as I speak normally, it jumps to 90 or more. Maybe I have a strong voice, but not that strong!
How can I calibrate that app?
Comments
"Silent" rooms often exhibit higher sound levels than you might expect, so the 52 dB value doesn't seem unrealistic. However, speaking normally should not produce a 90 dB reading unless you have your mouth very close to the microphone.
For both Lp and Leq measurements, the displayed value is given in dB referenced to 20 micropascals, by definition.
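As a quick illustration (just the textbook relationship, in Swift, not code from the app), the conversion from an rms pressure to dB SPL looks like this:

    import Foundation

    // Reference pressure for dB SPL: 20 micropascals (rms).
    let referencePressure = 20e-6  // Pa

    // Convert an rms sound pressure in pascals to a level in dB SPL.
    func splLevel(rmsPressure: Double) -> Double {
        return 20.0 * log10(rmsPressure / referencePressure)
    }

    // Example: 1 Pa rms corresponds to roughly 94 dB SPL.
    print(splLevel(rmsPressure: 1.0))  // ~93.98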
Calibration of the built-in mic will need to be performed relative to a separate calibrated sound level meter.
You might find this post helpful.
Ben
I've been looking at some SLM apps for iOS and Android and, roughly speaking, I found two methods (both sketched below):
- "trimming": a phone-model-specific constant is added to or subtracted from the measured value, e.g. 78 dB (measured) - 3 = 75 dB (displayed), or 55 (measured) + 6 = 61 (displayed)
- "scaling": the dB value is multiplied or divided by a phone-model-specific scale factor, e.g. 70 (measured) * 1.05 = 73.5 dB (displayed)
- (one Android app used a combination of both approaches)
I was wondering which approach is used in SoundMeter (and in your dB app?) and what your thoughts are on this subject.
Regards,
gorecki.
Adding to or subtracting from a dB value is the same as multiplying or dividing the value upon which the dB value is based. Decibels (dB) simply provide a means of expressing values on a logarithmic scale.
For example: 1 Volt = 0 dBV, 10 V = 20 dBV, 100 V = 40 dBV, etc. (dBV = 20*log10(V), referenced to 1 Volt)
So, multiplying the raw voltage value by 10 is the same as adding 20 dB to the decibel value (dBV, in this case).
In other words, scaling and trimming are different ways to accomplish the same task (although it should be noted that your description of scaling is incorrect: the dB value itself is not scaled, but its non-dB counterpart is scaled before being converted to a dB value).
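As a quick numerical check (in Swift; purely an illustration, not code from any app):

    import Foundation

    // dBV = 20*log10(V), referenced to 1 Volt.
    func dBV(volts: Double) -> Double {
        return 20.0 * log10(volts)
    }

    let v = 2.5
    print(dBV(volts: v))           // ~7.96 dBV
    print(dBV(volts: v * 10.0))    // ~27.96 dBV (multiplying the voltage by 10...)
    print(dBV(volts: v) + 20.0)    // ~27.96 dBV (...is the same as adding 20 dB)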
Faber apps scale raw input signals based on measured (calibrated) sensitivity values and then calculate levels in dB. In SignalScope Pro, this allows for consistency between the various tools. For example, you can look at the time waveform in the oscilloscope tool with instantaneous amplitude expressed in pascals (Pa). The level meter tool will show the level in dB SPL (relative to 20 micropascals, rms) that is consistent with the time signal levels you can see in the oscilloscope.
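In outline, that general approach looks something like the following Swift sketch (a simplification with a made-up sensitivity value, not the actual SignalScope source):

    import Foundation

    // Hypothetical sensitivity: pascals per unit of raw input signal.
    // In practice it comes from calibration against a reference instrument.
    let sensitivity = 0.05         // Pa per raw unit (made-up value)
    let referencePressure = 20e-6  // 20 micropascals (rms)

    // Scale raw samples into pascals, then compute the rms level in dB SPL.
    func levelInDBSPL(rawSamples: [Double]) -> Double {
        let pascals = rawSamples.map { $0 * sensitivity }
        let meanSquare = pascals.reduce(0.0) { $0 + $1 * $1 } / Double(pascals.count)
        return 20.0 * log10(sqrt(meanSquare) / referencePressure)
    }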
Ben
Thanks for clarifying.
I knew about decibels being a logarithmic representation of the ratio between a measured pressure (or voltage) value and a reference value. And I also got that adding/subtracting from a dB value (i.e. "trimming") is equivalent to multiplying/dividing (i.e. scaling) the underlying pressure or voltage value.
However, I really did find some Android apps (at least 2) which applied calibration by scaling dB values themselves. When I filled in a calibration value of 2 (the default being 1), the dB readings in those apps exactly doubled. Another Android app I found explicitly mentioned in the UI that calibration was achieved as follows: dB' = (dB x scale) + trim.
I am not saying that these are smart ways of doing it, only that it is being done. It is very likely that these apps are made by amateurs who do not know much about acoustics (while you obviously do).
So, just so that I understand correctly, when you say your apps "scale raw input signals based on measured (calibrated) sensitivity values and then calculate levels in dB", does this mean that, before the dB value is calculated, the (voltage) signal is multiplied/divided by a constant device/model-specific scale factor? For example, on an original iPhone you multiply the signal by a constant X, on an iPhone 3G by a constant Y, on an iPhone 3GS by a constant Z, etc.? And X, Y and Z were probably determined experimentally by comparing iPhones against reference equipment?
Regards,
gorecki.
Buyer beware of the various sound level meter apps out there. Some of them show in their screenshots a max sound level that exceeds the peak level. Conventional (even standardized) use of the terms max and peak in sound level measurements will never yield a max sound level that is greater than the peak. So, that is an immediate indication that an app may not be reliable.
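Roughly speaking, the two quantities are computed like this (a simplified Swift illustration, not a standards-compliant implementation), which is why the max can never exceed the peak:

    import Foundation

    let referencePressure = 20e-6  // 20 micropascals

    // Peak level: based on the single largest instantaneous pressure magnitude.
    func peakLevel(pressures: [Double]) -> Double {
        let peak = pressures.map { abs($0) }.max() ?? referencePressure
        return 20.0 * log10(peak / referencePressure)
    }

    // Max level: the largest of the rms-based levels taken over short blocks.
    // The rms of any block can never exceed the overall peak magnitude,
    // so the max level can never exceed the peak level.
    func maxLevel(pressureBlocks: [[Double]]) -> Double {
        let levels = pressureBlocks.map { block -> Double in
            let meanSquare = block.reduce(0.0) { $0 + $1 * $1 } / Double(block.count)
            return 20.0 * log10(sqrt(meanSquare) / referencePressure)
        }
        return levels.max() ?? 0.0
    }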
Regarding Faber apps, the input signal is multiplied by a constant scale factor, or sensitivity. The default sensitivity values are device-independent (i.e., they're the same for all iOS devices). This is because Apple's devices have been fairly consistent in their treatment of the audio input signals. If a user needs a more accurate sensitivity value, the Calibration screen within the app will let them specify an arbitrary sensitivity or automatically calculate a sensitivity based on a known input level.
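Conceptually, automatic calibration against a known input level amounts to something like this Swift sketch (the idea only; the names and numbers are illustrative, not the app's actual code):

    import Foundation

    // Suppose the meter currently reads `measuredDB` while a calibrator or
    // reference meter says the true level is `knownDB`. Adjust the
    // sensitivity so the two readings agree.
    func adjustedSensitivity(current: Double, measuredDB: Double, knownDB: Double) -> Double {
        return current * pow(10.0, (knownDB - measuredDB) / 20.0)
    }

    // Example: the app reads 91.5 dB against a 94.0 dB calibrator tone,
    // so the sensitivity is scaled up by 10^(2.5/20), about 1.33.
    let newSensitivity = adjustedSensitivity(current: 0.05, measuredDB: 91.5, knownDB: 94.0)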
Ben