Hi Rob,
HID programming in .NET means a lot of interop, async operations and other “nice” things.
Or in other words - it’s nothing you would do as a “beginner’s project”.
Anyhow - above I wrote that we decided to build our own wrapper - one of the reasons was timing.
BUT - there are a few things you should think about:
a.) especially the 8055 is not a really fast card (with the 8061 it’s much better)
b.) when we talk about “fast” this is a very relative thing (time is always relative, as Albert Einstein proved).
So if you use the Velleman DLL (I’m talking about the 8055) you will lose some time - or better put, your app has to wait some time for a response.
This is what we avoid (thanks to fully asynchronous handling).
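Just to illustrate what I mean by “fully asynchronous” - here is a minimal sketch (NOT our actual wrapper) of pushing the blocking DLL call off the UI thread in C#. I’m assuming the usual K8055D.dll exports (OpenDevice, ReadAllDigital, CloseDevice) - check the names and signatures against your DLL version:

    using System;
    using System.Runtime.InteropServices;
    using System.Threading.Tasks;

    static class K8055
    {
        // P/Invoke declarations - assuming the standard K8055D.dll exports.
        [DllImport("K8055D.dll")] public static extern int OpenDevice(int cardAddress);
        [DllImport("K8055D.dll")] public static extern void CloseDevice();
        [DllImport("K8055D.dll")] public static extern int ReadAllDigital();

        // Wrap the blocking read so the UI thread never has to wait for the card.
        public static Task<int> ReadAllDigitalAsync()
        {
            return Task.Run(() => ReadAllDigital());
        }
    }

In an async event handler you would then write something like “int inputs = await K8055.ReadAllDigitalAsync();” - the menus stay responsive while the card takes its time.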
To come back to relative time - what are we actually talking about?
We talk about waiting for (let’s assume) 5 to 10 ms.
Or in other words - you make a read and this costs you 5ms - with 100 reads this would mean half a second.
AFAIK the 8055 has a “read rate” of 40ms (I think I read something like that in the documentation)
- so 40ms is the shortest interval at which you can get results.
Which means about 20 reads per second.
If you use this read frequency you will “lose” about a tenth of each second in your app,
divided up into 20 chunks of 5ms.
I’m quite sure that no user will have a problem with this - it will not result in any “UI delays” (hanging menus or the like).
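To make the numbers concrete - the typical pattern is a small background loop that polls the card roughly every 40ms. Again just a sketch under the same assumption (standard K8055D.dll export, names may differ in your DLL version):

    using System;
    using System.Runtime.InteropServices;
    using System.Threading;
    using System.Threading.Tasks;

    static class Poller
    {
        [DllImport("K8055D.dll")] static extern int ReadAllDigital();   // blocking read, a few ms

        // Polls roughly 20-25 times per second on a worker thread;
        // the caller only ever sees finished values.
        public static async Task PollAsync(Action<int> onValues, CancellationToken ct)
        {
            while (!ct.IsCancellationRequested)
            {
                int inputs = await Task.Run(() => ReadAllDigital());
                onValues(inputs);          // e.g. update the UI here
                await Task.Delay(40);      // ~40ms “read rate” of the 8055
            }
        }
    }

Started from the UI thread with “_ = Poller.PollAsync(UpdateDisplay, cts.Token);” the UI never blocks - the few ms per read are spent on the thread pool, not in your message loop.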
The “problem” will occur in measurement precision - the “acquisition rate” (I hope this makes sense - in German I would call it “Datenerfassungsrate”).
BUT - this has nothing to do with the Velleman DLL - it depends on the hardware.
So the limit is not the 5ms “UI blocking” - it is the hardware.
5ms means nothing for a UI - it is pretty fast - you won’t see it.
If this “limitation” is too much for your solution you have to think about other hardware (see below).
What I want to say: the Velleman DLLs are OK for most use cases.
The negatives: no direct .NET support, some calls are blocking, “difficult” multi-board handling.
And about “other hardware” - we once had a very interesting project.
There was an existing machine cutting section tubes with a disk saw.
The problem - when the saw blade becomes blunt you have to change it (to have it sharpened).
So the goal was to find out the optimal time when a blade must be changed.
You can’t stop the machine for a check (the tubes keep coming out of the machine - stopping means shutting down the whole system).
So the idea was to measure the current used while cutting.
If the blade becomes blunt - the current will rise.
The process runs like this - tube passing by (a few seconds) - CUT (about 1 second) - tube passing by…
Since tubes have different shapes you can’t say - oh, 7.6 A of current - change the blade.
What we had to do was to build a graph of the cutting process.
The blade hits one corner - a bit higher current - the blade hits more corners - higher current - the blade cuts the last part of the tube - lower current.
The final approach started with a learning phase:
The operator uses a new blade to cut profile X.
He tells our system - this is “very OK”.
Then he cuts the same tube with a blunt blade - and tells our system - this is the limit.
Later we check the curves - detect the type of tube being cut - and check whether the blade is still OK.
To come to the point - to achieve this goal we had to get as many measurement points as possible to have “detailed graphs”.
Our first approach - using a digital sensor - failed: the conversion/transmission time was too long.
The final result was a “CPU Board” with two high-speed analog converters which measured (overlapped) the current drawn by the motor.
- last but not least we also measured some other things (blade speed, blade vibration and so on) - but that doesn’t matter here.
The “CPU Board” gets a signal “cutting starts” - and from that moment on it does nothing but collect values and buffer them.
- interrupts were turned off and other things were done to optimize the read rate.
AFTER the cut the board signals the PC - “I have data” - and (using the delay until the next cut) transfers all the data to the PC system.
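Just to make the buffer-then-transfer idea graspable in code - a purely illustrative sketch (NOT the real firmware, all names are made up):

    using System;
    using System.Collections.Generic;

    // Buffer-then-transfer: while the event runs, do nothing but sample;
    // analyse / hand the data over only in the pause afterwards.
    class BufferedAcquisition
    {
        readonly List<double> samples = new List<double>();
        readonly Func<double> readSensor;   // hypothetical fast sample source

        public BufferedAcquisition(Func<double> readSensor) { this.readSensor = readSensor; }

        // Called as fast as possible between “cut start” and “cut end”.
        public void Sample() { samples.Add(readSensor()); }

        // Called in the gap before the next cut: hand over the whole curve at once.
        public double[] TakeCurve()
        {
            var curve = samples.ToArray();
            samples.Clear();
            return curve;
        }
    }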
What I wanted to tell you with this story - don’t worry too much about timings - when it comes to “very fast reads” the hardware (analog converters, sensors, …) will be the limit.
And when you talk about “…not missing any short pushes of a pushbutton…” you are ultimately talking about (relatively) lengthy processes.
Even the debounce time (needed for a push button) is in the ms range.
Assume a “very fast button pusher” - how often can he hit the button per second?
10 times? - that means 100ms per push.
We are talking about a 5ms delay and a 10ms read rate - so the user could press the button about 100 times per second before you reach the point where you would lose data (miss a push).
By the way - this user would spend most of his time at the doctor, letting his “button finger” recover.
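If you want to see how little code a “don’t miss a push” loop actually needs - here is a sketch of simple edge detection on a polled input. readButton is a stand-in (e.g. one bit taken from the card’s digital inputs), not a real API:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class ButtonWatcher
    {
        // Polls a button and counts rising edges (presses).
        public static async Task CountPressesAsync(Func<bool> readButton, Action<int> onPress,
                                                   CancellationToken ct)
        {
            bool last = false;
            int presses = 0;
            while (!ct.IsCancellationRequested)
            {
                bool now = readButton();
                if (now && !last) onPress(++presses);   // rising edge = one push
                last = now;
                // 10ms poll interval: only a push shorter than 10ms could be missed,
                // i.e. you would need ~100 pushes per second before data gets lost.
                await Task.Delay(10);
            }
        }
    }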
I hope my thoughts could help you a bit.
Regards
Manfred