When the processing of large VCD input files was spread across multiple
parse_contents() invocations, the resulting sigrok stream of sample data
had gaps in it and the total timing was off. For instance, 74ms of input
data were interpreted as spanning some 600ms or 300ms, depending on the
number of channels in the input stream.
Move the "previous timestamp" variable to the input module context. This
eliminates the inappropriate gaps and fixes the translation of VCD file
timestamps to sigrok sample numbers.
This fixes bug #1075.
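Roughly, the fix keeps the running state in the module's context struct
instead of in a variable that resets on every call; a minimal sketch with
illustrative field and helper names (the real layout lives in
src/input/vcd.c):

struct context {
        /* ... existing fields ... */
        uint64_t prev_timestamp;        /* survives across parse_contents() calls */
};

static void parse_contents(const struct sr_input *in)
{
        struct context *inc = in->priv;
        uint64_t timestamp;

        /* A function-local "previous timestamp" would restart at 0 on
         * every invocation and insert a bogus gap between buffer chunks.
         * Reading and updating inc->prev_timestamp keeps the translation
         * of VCD timestamps to sample numbers continuous. */
        while (get_next_timestamp(in, &timestamp)) {    /* hypothetical helper */
                emit_samples(in, timestamp - inc->prev_timestamp);      /* hypothetical helper */
                inc->prev_timestamp = timestamp;
        }
}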
../src/input/wav.c: In function ‘send_chunk’:
../src/input/wav.c:200:2: warning: ‘memset’ used with length equal to number of elements without multiplication by element size [-Wmemset-elt-size]
memset(fdata, 0, CHUNK_SIZE);
^~~~~~
../src/input/raw_analog.c: In function ‘init’:
../src/input/raw_analog.c:133:31: warning: ‘%d’ directive output may be truncated writing between 1 and 10 bytes into a region of size 6 [-Wformat-truncation=]
snprintf(channelname, 8, "CH%d", i + 1);
^~
../src/input/raw_analog.c:133:28: note: directive argument in the range [1, 2147483647]
snprintf(channelname, 8, "CH%d", i + 1);
^~~~~~
../src/input/raw_analog.c:133:3: note: ‘snprintf’ output between 4 and 13 bytes into a destination of size 8
snprintf(channelname, 8, "CH%d", i + 1);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
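The warnings point at the fixes fairly directly; a hedged sketch of the
kind of change that silences them (exact types and buffer sizes in the
tree may differ):

/* wav.c: clear every byte of the float sample buffer, not just CHUNK_SIZE bytes. */
memset(fdata, 0, CHUNK_SIZE * sizeof(fdata[0]));

/* raw_analog.c: size the name buffer for the worst-case "CH%d" expansion. */
char channelname[16];
snprintf(channelname, sizeof(channelname), "CH%d", i + 1);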
When the processing of the columns of a text line detected errors, the
loop was aborted and the routine returned early, but allocated resources
were not freed. Fix the remaining memory leaks in these error code paths.
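The shape of such a fix is typically a single exit path that releases
everything; a generic sketch rather than the module's literal code
(check_columns() is a made-up stand-in):

static int parse_lines(struct context *inc, const char *buf)
{
        char **lines, **columns;
        int ret;

        ret = SR_OK;
        columns = NULL;
        lines = g_strsplit_set(buf, "\r\n", 0);
        for (size_t i = 0; lines[i]; i++) {
                columns = g_strsplit(lines[i], ",", 0);
                if (!check_columns(inc, columns)) {
                        ret = SR_ERR;
                        break;  /* previously returned here, leaking 'lines' and 'columns' */
                }
                g_strfreev(columns);
                columns = NULL;
        }
        g_strfreev(columns);    /* g_strfreev(NULL) is a no-op */
        g_strfreev(lines);

        return ret;
}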
The constant at the top of the source file is meant to be the number of
samples in a datafeed submission chunk; the previous implementation
erroneously treated it as a size in bytes. There is no need to round the
buffer size down to a multiple of the unit size.
The previous implementation sent one sigrok session datafeed packet per
processed CSV line. This is rather inefficient for the CSV input module,
and triggers a dramatic performance loss in the srzip output format.
Communicate up to 128K samples within one datafeed packet. This fixes
bug #695.
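A rough sketch of the batched submission against the libsigrok session
API; the context fields and the flush point are illustrative, not the
module's exact code:

#define CHUNK_SIZE      (128 * 1024)    /* samples per datafeed packet, not bytes */

static void flush_samples(const struct sr_input *in, struct context *inc)
{
        struct sr_datafeed_packet packet;
        struct sr_datafeed_logic logic;

        if (!inc->sample_count)
                return;

        logic.unitsize = inc->unitsize;
        logic.length = inc->sample_count * inc->unitsize;       /* length is in bytes */
        logic.data = inc->sample_buffer;
        packet.type = SR_DF_LOGIC;
        packet.payload = &logic;
        sr_session_send(in->sdi, &packet);

        inc->sample_count = 0;
}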
Factor out the repeated calculation of the unit size, which is derived
from the channel count. Fix a minor memory leak in an error path while we
are here. (Other memory leaks in rare error paths remain after this
commit.)
Suggested-By: Elias Oenal <sigrok@eliasoenal.com>
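The derivation itself is just a rounding-up division; something along
these lines (the helper name is illustrative):

/* Bytes needed to store one sample of 'num_channels' logic channels. */
static size_t logic_unitsize(size_t num_channels)
{
        return (num_channels + 7) / 8;
}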
On the Windows platform it appears to be popular to _not_ terminate the
very last line in a text file, which results in an unmet constraint in
the CSV input module and an internal exception in PulseView that aborts
program execution.
Cope with the absence of the text line termination sequence at the very
end of the input stream. Keep all other checks in place, such that only
completely received text lines get processed.
This fixes bug #635.
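Conceptually, end() now flushes whatever text is still in the
accumulation buffer even when the trailing EOL is missing; a loose sketch
with an assumed process_buffer() signature:

static int end(struct sr_input *in)
{
        int ret;

        /* Flush trailing text that lacks a line termination sequence. */
        if (in->buf->len)
                ret = process_buffer(in, TRUE); /* TRUE: final chunk, accept an unterminated line */
        else
                ret = SR_OK;

        /* ... send the datafeed end packet as before ... */

        return ret;
}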
"Document" the current state of the implementation in the CSV input
module's source code. Discuss how text handling is non-trivial, which
approaches are available and how they have drawbacks.
Mention the lack of support for the import of analog data as well.
The CSV input module supports variable-length end-of-line encodings
(either CRLF, or CR, or LF). When a batch of accumulated text lines has
been processed, skip the corresponding number of characters after the
end of the last processed line.
This fixes one of the issues discussed in bug #635.
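The point is that the number of characters to skip depends on which
termination style was detected; roughly (the context field is
illustrative):

/* Detect the EOL style once and remember its length. */
if (g_strstr_len(buf, len, "\r\n"))
        inc->eol_len = 2;       /* CRLF */
else
        inc->eol_len = 1;       /* lone CR or lone LF */
/* After processing a batch of lines, advance by inc->eol_len characters
 * past the end of the last processed line instead of a hard-coded count. */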
The input module's receive() and end() invocations both end up calling
process_buffer(). It is perfectly legal to call the process routine with
an empty accumulation buffer, especially when it is called from end().
This fixes a condition where PulseView raised a fatal error at the end
of a completed successful import.
Reported-By: Sergey Alirzaev <zl29ah@gmail.com>
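The guard amounts to an early return for the empty case; sketched with
the same assumed signature as above:

static int process_buffer(struct sr_input *in, gboolean is_eof)
{
        /* Nothing accumulated, e.g. end() running after receive() already
         * consumed the complete input. This is not an error. */
        if (!in->buf->len)
                return SR_OK;

        /* ... split into lines, parse columns, submit samples ... */

        return SR_OK;
}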
Move an independent test for single/multi column operation out of a code
path that checked for and then processed text lines. This commit does
not change behaviour, but prepares for a subsequent commit.
Factor out a magic string literal which held a delimiter set yet could
be mistaken for an (assumed) fixed termination string. Concentrate the
determination of the end-of-line text encoding, as well as the resulting
set of possible delimiters, in one nearby location. The symbolic name
for the delimiter set eliminates any doubt about its purpose.
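In code the difference is small but removes the ambiguity; a sketch (the
macro name is made up):

/* A *set* of delimiter characters for g_strsplit_set(), not a fixed
 * "\r\n" termination string. Covers CRLF, CR-only, and LF-only input. */
#define LINE_DELIM_SET  "\r\n"

lines = g_strsplit_set(in->buf->str, LINE_DELIM_SET, 0);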
The default so far was 0, which meant there would be no significant
digits at all, yielding results that looked strange/wrong to the user.
Long-term all remaining drivers should be fixed to use the actual,
correct digits and spec_digits values according to the device's
capabilities and/or datasheet/manual. Until that is done, a default
of digits=2 is used as a temporary workaround.
This fixes the remaining parts of bug #815.
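With the public analog helpers, the workaround boils down to initializing
packets with two digits until drivers provide real values; a hedged
sketch:

struct sr_datafeed_analog analog;
struct sr_analog_encoding encoding;
struct sr_analog_meaning meaning;
struct sr_analog_spec spec;

/* Temporary default of 2 significant digits (likewise for spec_digits)
 * until drivers supply values that match the device's capabilities. */
sr_analog_init(&analog, &encoding, &meaning, &spec, 2);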
Some of the standard helper functions take a log prefix parameter that is
used when printing messages. This log prefix is almost always identical to
the name field in the driver's sr_dev_driver struct. The only exceptions
are drivers which register multiple sr_dev_driver structs.
Instead of passing the log prefix as a parameter, simply use the driver's
name. This simplifies the API, gives consistent behaviour between different
drivers, and also makes it easier to identify where a message originates
when a driver registers multiple sr_dev_driver structs.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
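The effect on the helpers is essentially this, shown as an illustration
rather than the exact std.c code:

/* Before: every caller passed its own prefix string around. */
sr_dbg("%s: Starting scan.", prefix);

/* After: the prefix is taken from the registered driver itself. */
sr_dbg("%s: Starting scan.", di->name);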
This makes the code shorter, simpler and more consistent, and also
ensures that the (same) debug messages are always emitted, that the
packet.payload field is consistently set to NULL, and so on.
According to the information I have, VCD files should be plain ASCII,
but we got a report of a variant that adds a UTF-8 BOM at the beginning
of the file, so we need to skip it.
This fixes bug #755.
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
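Skipping the BOM is a three-byte comparison at the start of the input; a
sketch with simplified buffer handling:

static const unsigned char utf8_bom[] = { 0xef, 0xbb, 0xbf };

if (len >= sizeof(utf8_bom) && memcmp(buf, utf8_bom, sizeof(utf8_bom)) == 0) {
        buf += sizeof(utf8_bom);        /* parse the plain ASCII content after the BOM */
        len -= sizeof(utf8_bom);
}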
Accept (and set as the default) 0 for numchannels, which means
'unlimited'. I think it is convenient to read all channels of a VCD file
by default.
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Benefits:
* only channels really used in the VCD file will be added
* we can give them the proper names found in the VCD file
* less code :)
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
The current code moves the identifier string one byte to the front to
overwrite the bit value, so that 'tokens[i]' becomes the string to
compare against the desired value. This copying is unnecessary; just pass
a properly set up pointer instead.
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
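In other words, instead of memmove()-ing the token over its own first
byte, point one byte past it; roughly:

/* tokens[i][0] holds the bit value ('0'/'1'); the identifier follows it. */
char value = tokens[i][0];
const char *identifier = tokens[i] + 1;

/* Compare 'identifier' against the channel's id string directly,
 * no copying of the token contents required. */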
There is no need to bail out on vectors. As long as there are tokens
left, we can try to parse the rest. Also, print a message so the user
knows what's going on. Here is a test case VCD:
$timescale 1 ns $end
$var wire 1 n0 addr_0 $end
$var wire 1 n1 addr_1 $end
$enddefinitions $end
#0
0n0
b1 n1
#1
1n0
b0 n1
#2
0n0
b1 n1
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
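Loosely, the parser now notes the unsupported value and moves on instead
of giving up; a sketch with simplified token handling:

if (tokens[i][0] == 'b') {
        /* Vector value ("b<bits> <identifier>"), not supported yet. */
        sr_info("Skipping unsupported vector value.");
        i++;            /* also skip the identifier token that follows */
        continue;       /* keep parsing the remaining tokens */
}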
If we hit the missing identifier case, then we have reached the end of
the token list. So we should break out of the loop, not continue.
Otherwise we will read past the end of the array, as this minimal test
case shows:
$timescale 1 ns $end
$var wire 1 n0 addr_0 $end
$enddefinitions $end
1
gives:
$ ./sigrok-cli -I vcd -i no_mod.vcd -O vcd -o /tmp/o.vcd
Segmentation fault
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
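The fix itself is a one-word change in spirit; roughly (the exact
condition in vcd.c differs):

if (!tokens[i + 1]) {
        sr_dbg("Identifier missing, stopping at the end of the token list.");
        break;  /* was 'continue', which walked past the end of the array */
}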
A $var block can have an optional index item which looks like '[<sth>]'.
Parse it, too, and append it to the channel name.
This fixes bug #322. A first version was posted by Simon Richter. This
version is rebased and simplified.
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
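Appending the optional index is a small string concatenation; a sketch
with glib helpers (variable names are illustrative):

/* E.g. "$var wire 1 n0 addr [3] $end": the index token starts with '['. */
if (tokens[idx] && tokens[idx][0] == '[')
        name = g_strconcat(base_name, tokens[idx], NULL);       /* "addr[3]" */
else
        name = g_strdup(base_name);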
As we're downsampling, several record timestamps can match the specified
trigger time. For this reason, it is possible that several trigger
packets are sent when a file is loaded. This change prevents the issue by
sending a trigger packet only for the first matching record.
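The guard is a simple "already sent" flag in the context; sketched
against the libsigrok datafeed API (field names are illustrative):

if (timestamp >= ctx->trigger_time && !ctx->trigger_sent) {
        struct sr_datafeed_packet packet;

        packet.type = SR_DF_TRIGGER;
        packet.payload = NULL;
        sr_session_send(sdi, &packet);
        ctx->trigger_sent = TRUE;       /* several downsampled records may match; announce only once */
}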