How do I read real-time voice data (from the microphone)?

collecte 2003-06-02 09:52:04
It needs to be forwarded in real time to another computer for playback.
7 replies
reedseutozte 2003-06-02
http://www.experts-exchange.com/Programming/Programming_Languages/Delphi/Q_10070431.html
collecte 2003-06-02
Why can't I open this link? Do you have any other addresses?
What should I use for the transmission (which component, and UDP or TCP)?
reedseutozte 2003-06-02
http://www.geocities.com/macrotech_tr/mmedia/sound.html
Recording normally uses two temporary buffers.
To get real-time transmission, you can send the data from the buffer that has just been filled inside the handler that responds to the MM_WIM_DATA message.
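As for UDP vs TCP: UDP is the usual choice for real-time voice, since a lost packet is better skipped than waited for. A rough, untested sketch of the idea (the socket setup, the names FSocket and FPeerAddr, and the port/address are made up for illustration; it assumes 32-bit Delphi with the WinSock unit and the recording code from the article below):

{assumed one-time setup, e.g. in FormCreate:
   WSAStartup($0101, WSAData);
   FSocket := socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
   FPeerAddr.sin_family := AF_INET;
   FPeerAddr.sin_port := htons(7000);                      - made-up port
   FPeerAddr.sin_addr.S_addr := inet_addr('192.168.0.2');  - made-up address }

procedure TForm1.MMInDone(var msg: TMessage); {declared with: message MM_WIM_DATA}
var
  Header: PWaveHdr;
begin
  Header := PWaveHdr(msg.lParam);
  {send only the bytes that were actually recorded into this buffer}
  sendto(FSocket, Header^.lpData^, Header^.dwBytesRecorded, 0,
         FPeerAddr, SizeOf(FPeerAddr));
  {then unprepare and re-queue the buffer, as shown in the article below}
end;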
reedseutozte 2003-06-02
Now's a good time to put your audio data into the memory block - if you're playing audio.

Header := New(PWaveHdr);
with Header^ do
begin
  lpData := pointer(memBlock);      {point the header at the memory block}
  dwBufferLength := memblocklength; {size of the block in bytes}
  dwBytesRecorded := 0;
  dwUser := 0;
  dwFlags := 0;
  dwLoops := 0;
end;


Except for setting the pointer to the block of memory and the length of the block, all the other fields should be set to zero - unless you want to play the same block of data multiple times.
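(An aside, not from the original article: to play the same block several times, the WAVEHDR documentation has you set the loop flags and a loop count after the header has been prepared, as described below, and before it is written to the output device - roughly:)

Header^.dwFlags := Header^.dwFlags or WHDR_BEGINLOOP or WHDR_ENDLOOP;
Header^.dwLoops := 3; {play this block three times - output buffers only}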

The next step is to prepare the data - why this is necessary, I don't know!

i:=waveOutPrepareHeader(HWaveOut^,Header,sizeof(TWavehdr));
if i<>0 then
application.messagebox('Out Prepare error','error',mb_ok);

i:=waveInPrepareHeader(HWaveIn^,Header,sizeof(TWavehdr));
if i<>0 then
application.messagebox('In Prepare error','error',mb_ok);


Then we need to send the new block of data to the audio device - either to be played or to be recorded on.

i:=waveOutWrite(HWaveOut^,Header,sizeof(TWaveHdr));
if i<>0 then
application.messagebox('Wave out error','error',mb_ok);

i:=waveInAddBuffer(HWaveIn^,Header,sizeof(TWaveHdr));
if i<>0 then
application.messagebox('Add buffer error','error',mb_ok);


The final thing to do when recording sound is to start!

i:=waveInStart(HwaveIn^);
if i<>0 then
application.messagebox('Start error','error',mb_ok);


There are also commands to stop and pause recording.
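For example (these are all in mmsystem):

waveInStop(HWaveIn^);      {stop recording - empty queued buffers stay queued}
waveInReset(HWaveIn^);     {stop and return all pending buffers to the application}
waveOutPause(HWaveOut^);   {pause playback}
waveOutRestart(HWaveOut^); {resume paused playback}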

Messages

If you're using messages to control the recording and playback of audio, then you need to have some handlers for the messages. The handlers
should be something like

procedure MMOutOpen(var msg: Tmessage); message MM_WOM_OPEN;
procedure MMOutClose(var msg: Tmessage); message MM_WOM_CLOSE;
procedure MMOutDone(var msg: Tmessage); message MM_WOM_DONE;
procedure MMInOpen(var msg: Tmessage); message MM_WIM_OPEN;
procedure MMInClose(var msg: Tmessage); message MM_WIM_CLOSE;
procedure MMInDone(var msg: Tmessage); message MM_WIM_DATA;


WOM messages are sent by audio out devices, and WIM messages are sent by audio in devices.

The open and close messages are sent when the device is either opened or closed (closing is covered in the next section) - these are not really
very useful messages to trap. The important messages are the DONE and DATA.

MM_WOM_DONE tells you that the block of data you were playing has been played, and you should now get rid of it.
MM_WIM_DATA tells you that the block of data has been recorded on and you should now deal with it as appropriate. For both messages,
you'll probably want to send some more data to the audio device.

The first thing to do with your returned data is to unprepare it; a pointer to the header that identifies the block is passed as the lParam of the
message.

Header:=PWaveHdr(msg.lparam);
i:=waveOutUnPrepareHeader(HWaveOut^,Header,sizeof(TWavehdr));
if i<>0 then
application.messagebox('Out Un Prepare error','error',mb_ok);

Header:=PWaveHdr(msg.lparam);
i:=waveInUnPrepareHeader(HWaveIn^,Header,sizeof(TWavehdr));
if i<>0 then
application.messagebox('In Un Prepare error','error',mb_ok);


You can then do as you will with the block of data, but remember to dispose of any memory that you don't want any more.

dispose(Header^.lpdata);
dispose(Header);
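Alternatively, for continuous recording (e.g. streaming the sound to another machine, as asked at the top of this thread), you can keep the block and hand it straight back to the input device instead of disposing of it - a sketch, following the waveInUnprepareHeader call above:

Header^.dwBytesRecorded := 0;
i := waveInPrepareHeader(HWaveIn^, Header, sizeof(TWaveHdr));
if i <> 0 then
  application.messagebox('In Prepare error', 'error', mb_ok);
i := waveInAddBuffer(HWaveIn^, Header, sizeof(TWaveHdr));
if i <> 0 then
  application.messagebox('Add buffer error', 'error', mb_ok);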


Disposing of the evidence.

Once we've finished with the soundcard, we need to get rid of the handle to the audio device. Before we can do that, we need to reset the device so
that any unused buffers are returned to the application for disposal.

if HWaveOut<>nil then WaveOutReset(HWaveOut^);
if HwaveOut<>nil then WaveOutClose(HWaveOut^);

if HwaveIn<>nil then WaveInReset(HWaveIn^);
if HwaveIn<>nil then WaveInClose(HWaveIn^);


Notice that the multimedia subsystem sometimes requires a pointer to a handle, and sometimes the handle itself - just one of those things.

A problem you'll find if you try this code is that the reset is asynchronous: you reset the audio device and close it - then your handler gets
called with a block of data that needs to be unprepared, but of course the handle you had is now invalid, and so you get a GPF if you use it (well, you
don't seem to, but I'm not prepared to take that kind of risk). To get around this, count the number of packets sent to the audio device, and
only execute the close when there are no remaining packets - you can do this in the handler that deals with returned packets of data.
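Something like this (a sketch; PacketsOut and Closing are made-up fields on the form, not part of the original listing):

{wherever a buffer is queued:}
i := waveInAddBuffer(HWaveIn^, Header, sizeof(TWaveHdr));
if i = 0 then Inc(PacketsOut);

{in the MM_WIM_DATA handler, after unpreparing and disposing:}
Dec(PacketsOut);
if Closing and (PacketsOut = 0) then
begin
  waveInClose(HWaveIn^);
  dispose(HWaveIn);
  HWaveIn := nil;
end;

{to shut down: set Closing := True, then call waveInReset(HWaveIn^) -
 the outstanding buffers come back through the handler and the last one closes the device}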

The example program

The listing is a complete program for recording audio and playing it directly back to the speaker - an echo! To use it, make a blank form with an
On Create and a Close Query handler, then replace the entire unit with the code in the listing. It's written for Delphi 1, but I would expect it to
work with Delphi 2.

Note: The code given uses a sampling rate of 11,000 samples per second; not all sound cards can support this rate. If your sound card does not
support it, then you will need to adjust the rate to 11,025 samples per second, which should be supported.
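If you want to be safe, you can probe a few standard rates with WAVE_FORMAT_QUERY (described in the first half of the article, posted further down this thread) and keep the first one the card accepts - a sketch, reusing the WaveFormat block from the article:

{somewhere with access to WaveFormat, e.g. in the On Create handler}
const
  Rates: array[0..3] of Longint = (11025, 22050, 44100, 8000);
var
  j: Integer;
...
for j := 0 to 3 do
begin
  WaveFormat^.wf.nSamplesPerSec := Rates[j];
  WaveFormat^.wf.nAvgBytesPerSec := Rates[j]; {8-bit mono, so one byte per sample}
  if waveInOpen(nil, 0, PWaveFormat(WaveFormat), 0, 0, WAVE_FORMAT_QUERY) = 0 then
    Break; {first rate the card accepts for recording}
end;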

Conclusion

Using the sound card at a low level to record and play back audio feels like it should be really complex, but in fact it isn't - that's the power of the
Windows API. There are a number of quirks in it, but it's fun!

sample source : www.undu.com/DN970901/00000038.htm

Regards, Zif.
reedseutozte 2003-06-02
jhui,

didn't you ask a question before? (a day ago?) And then, without any notification about the comments, you just deleted it??? Here, we all like to know what people think of our comments, bad or good.

Anyway, here is your answer:
(by Dr Darryl Gove - D.J.Gove@open.ac.uk)

article from UNDU:

Do you have a soundcard?

The first thing that you'll need to do when writing an audio application is to handle the possibility that the computer the application is running on
does not have a soundcard.

To do this we need to use the calls WaveOutGetNumDevs and WaveInGetNumDevs - they return the number of audio playing devices (Out)
and the number of recording devices (In). Most of the time there will be one of each - one soundcard.

if waveOutGetNumDevs = 0 then application.messagebox('No sound playing card', 'Error', mb_OK);
if waveInGetNumDevs = 0 then application.messagebox('No recording sound card', 'Error', mb_OK);


The best place for this code would probably be in the On Create handler for the form; however, before you compile the code, make sure that you
have included mmsystem in the uses list.
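In outline (a sketch of where the pieces go; keep whatever else Delphi put in the uses clause):

uses
  ..., mmsystem;  {append mmsystem to the generated uses list}

procedure TForm1.FormCreate(Sender: TObject);
begin
  if waveOutGetNumDevs = 0 then
    application.messagebox('No sound playing card', 'Error', mb_OK);
  if waveInGetNumDevs = 0 then
    application.messagebox('No recording sound card', 'Error', mb_OK);
end;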

What kind of sound do you want?

You should be aware that there is a variety of options for sound quality - whether it is mono or stereo, 8 or 16 bit, and the sampling frequency.
You need to ask the soundcard whether it supports the format.

The basic wave format information is handled by the TWaveFormat block

TWaveFormat = record
  wFormatTag: Word;         {format type}
  nChannels: Word;          {number of channels: 1 for mono, 2 for stereo}
  nSamplesPerSec: Longint;  {sample rate}
  nAvgBytesPerSec: Longint; {number of bytes per second recorded}
  nBlockAlign: Word;        {size in bytes of a single sample frame (all channels)}
end;


However, you won't use this directly, since you need to use a wrapper that relates to the particular format you want to store the data in -
basically, you don't just ask "Do you support this sample rate?" but "Do you support this way of storing the data?". The only format really supported
is PCM, but potentially there could be other formats supported by the multimedia subsystem, and you as a programmer would not need
to worry about them.

In order to ask about PCM data, you need to use the TPCMWaveFormat block:

TPCMWaveFormat = record
  wf: TWaveFormat;
  wBitsPerSample: Word;
end;


This is just the TWaveFormat block with an additional word telling the computer the bits per sample (usually 8 or 16). Each sample (the number
of samples per second is the sampling frequency) is either 8 bit or 16 bit, and either stereo or mono - so the smallest frame is 8-bit mono, or 8 bits
per sample, and the largest is 16-bit stereo, or 32 bits per sample. This follows directly from the data you specify in the
TWaveFormat block - so, leaving aside the question of why you need to tell the computer again, we'll have a look at setting up the TWaveFormat
block:

WaveFormat := New(PPCMWaveFormat);
with WaveFormat^.wf do
begin
  wFormatTag := WAVE_FORMAT_PCM;   {PCM format - the only option!}
  nChannels := 1;                  {mono}
  nSamplesPerSec := 11000;         {11kHz sampling}
  nAvgBytesPerSec := 11000;        {we aim to use 8 bit sound, so only 11k per second}
  nBlockAlign := 1;                {only one byte in each sample}
  WaveFormat^.wBitsPerSample := 8; {8 bits in each sample}
end;
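For comparison, a 16-bit stereo 44.1kHz setup (not what the article's example uses) would be filled in like this; it shows how the derived fields follow from the others:

with WaveFormat^.wf do
begin
  wFormatTag := WAVE_FORMAT_PCM;
  nChannels := 2;                          {stereo}
  nSamplesPerSec := 44100;                 {44.1kHz sampling}
  nBlockAlign := 2 * (16 div 8);           {channels * bytes per sample = 4 bytes per frame}
  nAvgBytesPerSec := 44100 * nBlockAlign;  {176400 bytes per second}
  WaveFormat^.wBitsPerSample := 16;        {16 bits per sample, per channel}
end;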


So we've set up the type of audio we want to record; the next thing to do is to ask the soundcard if it can do it.

i := waveOutOpen(nil, 0, PWaveFormat(WaveFormat), 0, 0, WAVE_FORMAT_QUERY);
if i <> 0 then
  application.messagebox('Play format not supported', 'Error', mb_OK);

i := waveInOpen(nil, 0, PWaveFormat(WaveFormat), 0, 0, WAVE_FORMAT_QUERY);
if i <> 0 then
  application.messagebox('Record format not supported', 'Error', mb_OK);


Getting a handle on it.

Like most things in Windows, we end up referring to the soundcard using a handle; we need one handle to record and one to playback.

Having set up our WaveFormat block, we can ask for a handle to a device that can either play or record that format.

HWaveOut := New(PHWaveOut);
i := waveOutOpen(HWaveOut, 0, PWaveFormat(WaveFormat), form1.handle, 0, CALLBACK_WINDOW);
if i <> 0 then
  application.messagebox('Problem creating play handle', 'Error', mb_OK);

HWaveIn := New(PHWaveIn);
i := waveInOpen(HWaveIn, 0, PWaveFormat(WaveFormat), form1.handle, 0, CALLBACK_WINDOW);
if i <> 0 then
  application.messagebox('Problem creating record handle', 'Error', mb_OK);


In this instance, we're going to use messages to handle the playback and recording of audio (we could have used a callback function instead). To use
messages, we need to pass the handle of a window that will receive the messages, and the CALLBACK_WINDOW value to tell the multimedia
subsystem that we're passing it a window handle.
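For completeness, the callback-function alternative looks roughly like this in 32-bit Delphi (a sketch, not from the article; the documentation forbids calling other wave* functions from inside the callback, so it just posts a message back to the form):

procedure WaveInProc(hwi, uMsg, dwInstance, dwParam1, dwParam2: Longint); stdcall;
begin
  if uMsg = WIM_DATA then
    PostMessage(Form1.Handle, WM_USER + 100, 0, dwParam1); {dwParam1 is the PWaveHdr}
end;

{...and the device would be opened with CALLBACK_FUNCTION instead of CALLBACK_WINDOW,
 passing Longint(@WaveInProc) where form1.handle is passed above}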

Being prepared

The final thing to do is to start either playing or recording sound. To do this we need to send packets of memory to the sound card, either
to be played or to be recorded on.

When you send data out to be played, playing starts as soon as you add a packet of data; extra packets of data are added to a queue and
played in sequence. If you're recording, then the blocks of memory are once again added to a queue - but they are not recorded on until you tell
the computer to start recording. If the computer runs out of packets to record on, then the recording stops.

So the first thing to do is to get a block of memory and to set up the data block that will tell the multimedia subsystem about it.

Tmemblock = array[0..memblocklength] of byte; {memblocklength is a constant giving the buffer size}
PmemBlock = ^Tmemblock;
memBlock := New(PmemBlock);
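To keep that recording queue topped up, you would normally allocate two or three of these blocks up front, queue them all, and then start - a sketch using made-up array names (the per-block setup is the same as in the continuation of the article, posted earlier in this thread):

const
  NumBufs = 3;
var
  k: Integer;
  Blocks: array[0..NumBufs - 1] of PmemBlock;
  Headers: array[0..NumBufs - 1] of PWaveHdr;
...
for k := 0 to NumBufs - 1 do
begin
  Blocks[k] := New(PmemBlock);
  Headers[k] := New(PWaveHdr);
  FillChar(Headers[k]^, sizeof(TWaveHdr), 0);
  Headers[k]^.lpData := pointer(Blocks[k]);
  Headers[k]^.dwBufferLength := memblocklength;
  waveInPrepareHeader(HWaveIn^, Headers[k], sizeof(TWaveHdr));
  waveInAddBuffer(HWaveIn^, Headers[k], sizeof(TWaveHdr));
end;
waveInStart(HWaveIn^);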
reedseutozte 2003-06-02
This one works, though it's in English - it explains how to record through the microphone.
Ask:
Hi! I am a college student and working with an audio compression project. Actually I am having 2 great problems on my project. They are recording and playing audio in Delphi. So please help me to solve the first problem, recording sound from microphone.

I am having a sound blaster 16 sound card in my PC. I hope that I can get 8bit mono PCM data from it, and then I will compress the data and save it to disk.

Can you show me how I can get the raw data from the sound card?

Thank you very much.

regards,
James.

collecte 2003-06-02
Strange - why can't I open this one either?
reedseutozte (haha), have you tried it yourself?
