I have a C++ application that uses NI-DAQmx Base to read up to 5 input signals at 1 kHz each, using this simple sequence:
- DAQmxBaseCreateTask()
- DAQmxBaseCreateAIVoltageChan()
- and then DAQmxBaseStartTask()
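
For reference, the setup looks roughly like the sketch below (the device/channel names are placeholders, error checking is omitted, and the sample-clock call reflects my assumption that the 1 kHz rate should be hardware-timed):

```cpp
#include "NIDAQmxBase.h"

int main(void)
{
    TaskHandle task = 0;

    // Create the task and add 5 AI channels ("Dev1/ai0:4" is a placeholder).
    DAQmxBaseCreateTask("aiTask", &task);
    DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0:4", "",
                                 DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);

    // Hardware-timed sampling at 1 kHz per channel, continuous acquisition.
    DAQmxBaseCfgSampClkTiming(task, "OnboardClock", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);

    DAQmxBaseStartTask(task);

    // ... read loop (see below) ...

    DAQmxBaseStopTask(task);
    DAQmxBaseClearTask(task);
    return 0;
}
```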
While reading the samples, I also run a simple loop that sends TTL signals to devices and reads input from another TTL source. In addition, images are displayed on the computer screen with specific timing that is managed by the CPU clock.
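
The read loop is roughly the following (buffer sizes are simplified; the TTL I/O and image updates happen alongside it):

```cpp
#include "NIDAQmxBase.h"
#include <chrono>

// Sketch of the read loop: 5 channels, 100 samples per channel per read
// (i.e. ~100 ms of data at 1 kHz).
void readLoop(TaskHandle task, volatile bool &running)
{
    const int32 kSampsPerChan = 100;
    const int32 kNumChans = 5;
    float64 data[kSampsPerChan * kNumChans];
    int32 read = 0;

    while (running) {
        // Blocks until 100 samples per channel are available (10 s timeout).
        DAQmxBaseReadAnalogF64(task, kSampsPerChan, 10.0,
                               DAQmx_Val_GroupByChannel, data,
                               kSampsPerChan * kNumChans, &read, NULL);

        // CPU timestamp taken after the read returns -- this is what I am
        // unsure about: how does it relate to when the samples were acquired?
        auto tRead = std::chrono::steady_clock::now();
        (void)tRead;

        // ... send/read TTL signals, update images, etc. ...
    }
}
```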
My question is: how can I synchronize the CPU clock with the AI samples? When I start the task, what is the delay before the first sample is taken, and how can I minimize that latency (if any)? Ultimately, I want to know what the AI values were in, for instance, the interval [-100, +100] ms around a specific moment on the CPU clock, with as little timing uncertainty as possible.
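
Concretely, my naive plan is to record a CPU timestamp t0 right before DAQmxBaseStartTask() and assume sample n was taken at t0 + n ms, along the lines of this sketch (steady_clock is a stand-in for whatever "the CPU clock" should be; is the t0 assumption valid, or is there a start-up delay I need to account for?):

```cpp
#include <chrono>
#include <cmath>

using Clock = std::chrono::steady_clock; // stand-in for "the CPU clock"

// Recorded immediately before DAQmxBaseStartTask(); this sketch assumes the
// first sample is taken at (or very near) this moment -- exactly the
// assumption I am asking about.
static Clock::time_point t0;

// Map a CPU moment to a sample index, assuming 1 sample per ms from t0.
long sampleIndexAt(Clock::time_point t)
{
    double seconds = std::chrono::duration<double>(t - t0).count();
    return std::lround(seconds * 1000.0);
}

// The window of interest around CPU moment t would then be the samples with
// indices [sampleIndexAt(t) - 100, sampleIndexAt(t) + 100].
```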
Thanks!