(parse_contents): Do not call sr_dbg() on every signal change.
This would be excessive even for sr_spew().
(read_until): Do not call ftell() just to be able to show some
number in a debug message later on.
These commands are superfluous and do not seem to make sense in
the context in which they were used. Also, $dumpvars was missing an
$end, and $dumpoff was used without any content.
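For reference, a minimal well-formed initial-value dump looks like
this (identifier codes chosen arbitrarily for illustration):

    $dumpvars
    0!
    1"
    $end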
Avoid writing a new timestamp for every changed signal if multiple
signals change state simultaneously. Also, keep signal transitions
on the same line with their timestamp to make the output easier to
inspect in a text editor. (VCD does not care whether newlines or
spaces are used to separate tokens.)
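As an illustration (timestamps and identifier codes are made up),
simultaneous transitions now share one timestamp and stay on one line:

    #1200 1! 0" 1#
    #1300 0!
    #1450 1" 0#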
(receive): Use probe index for sample byte selection too, not just
for bit selection. Also simplify the indexing expressions a bit.
This fixes the problem of incorrect output for probe indices 8 to
31.
Also, use double rather than float in the timestamp calculation,
and format the result directly as a floating point number rather
than converting it back to uint64_t.
Additionally, make sure that the state of all signals is written
for the very first sample in the stream. This fixes the problem
that signals would be displayed as indeterminate until the first
change.
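A minimal sketch of the corrected logic, with helper and variable
names invented for illustration (the module's actual code differs in
detail):

    #include <stdint.h>

    /* State of logic probe 'p' within one packed sample. Both the byte
     * and the bit are selected from the probe index, so probes 8 to 31
     * are handled correctly. */
    static inline int probe_state(const uint8_t *sample, unsigned int p)
    {
            return (sample[p / 8] >> (p % 8)) & 1;
    }

    /* Timestamp computed in double precision, suitable for printing
     * directly with a floating point format (scaling is illustrative). */
    static inline double sample_time_ns(uint64_t samplecount, uint64_t samplerate)
    {
            return (double)samplecount * 1e9 / (double)samplerate;
    }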
(context.samplecount): Make the sample counter part of the context
struct, rather than keeping it as a global static.
Allow the edge of an external clock input to be configured by
means of an SR_CONF_CLOCK_EDGE configuration setting. This is
a string option with the same format as SR_CONF_TRIGGER_SLOPE.
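A rough sketch of how such a string could be validated, assuming the
usual "r"/"f" values and libsigrok's SR_OK/SR_ERR_ARG return codes
(helper name is made up):

    #include <string.h>

    /* Map "r"/"f" to an internal flag; reject anything else. */
    static int parse_edge(const char *s, int *rising)
    {
            if (strcmp(s, "r") == 0)
                    *rising = 1;
            else if (strcmp(s, "f") == 0)
                    *rising = 0;
            else
                    return SR_ERR_ARG;
            return SR_OK;
    }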
Implement the configuration setting TRIGGER_SOURCE with the
choices CH (logic channels) and TRG (external trigger input).
Also implement the TRIGGER_SLOPE setting for selecting the
edge to trigger on (rising or falling).
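Illustratively, the choices could be advertised from config_list()
like this (array name is made up):

    static const char *const trigger_source_names[] = { "CH", "TRG" };

    /* Inside config_list(): */
    case SR_CONF_TRIGGER_SOURCE:
            *data = g_variant_new_strv(trigger_source_names,
                            G_N_ELEMENTS(trigger_source_names));
            break;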
It turns out that all LWLA protocol responses consist either
of 32-bit units or of 32-bit units combined into 64-bit units.
Thus it makes sense to double the basic unit size for reading
from 16 bits to 32 bits.
We cannot do the same for command messages though, as those
actually do use 16-bit quantities in some places, and 32-bit
arguments are not always aligned to 32-bit boundaries.
(acquisition_state.xfer_buf_in): Change unit type to uint32_t,
and update related macros and code accordingly.
(LWLA_TO_UINT32): New macro to replace LWLA_READ32, operating
directly on 32-bit values instead of pointers to 16-bit units.
Make use of a compiler-recognized idiom for bitwise rotation
to efficiently swap the 16-bit halves of a 32-bit word.
(LWLA_TO_UINT16): New macro to replace LWLA_READ16.
(LWLA_READ64): Remove unused macro.
(LWLA_WORD_[0123]): Slightly simplify 16-bit word extraction.
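A minimal illustration of the rotation idiom (the actual macro
definitions in the driver may differ in detail):

    #include <stdint.h>

    /* Rotate a 32-bit word by 16 bits, i.e. swap its 16-bit halves.
     * GCC and Clang recognize this pattern and emit a single rotate
     * instruction. */
    #define LWLA_ROTATE16(x) (((uint32_t)(x) << 16) | ((uint32_t)(x) >> 16))

LWLA_TO_UINT32() can then be expressed as this rotation combined with
a little-endian-to-host conversion.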
The return code SR_ERR_ARG is intended for reporting unsupported
or inapplicable device configuration settings and is not a hard
error. In order to indicate failure of internal sanity checks,
use SR_ERR_BUG instead.
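For example (the check and field names are purely illustrative):

    /* A violated internal invariant is a bug in the driver, not a bad
     * argument from the caller. */
    if (devc->transfer_count > MAX_TRANSFERS) {
            sr_err("Transfer count %u exceeds limit.", devc->transfer_count);
            return SR_ERR_BUG;
    }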
Without the cast, non-integer frequencies were not possible (e.g. with a
sampling frequency of 50 Hz we would end up with a signal frequency of
2 Hz instead of 2.5 Hz). The result was signals with an incorrect number
of samples per period.
BugLink: http://sigrok.org/bugzilla/show_bug.cgi?id=297
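The underlying issue is plain integer division; a reduced example with
the divisor chosen to match the numbers above (not the driver's actual
formula):

    uint64_t samplerate = 50;
    double frequency;

    frequency = samplerate / 20;            /* integer division: 2.0 */
    frequency = (double)samplerate / 20;    /* intended result: 2.5 */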
Drivers interpreted the uint64 values of the SR_CONF_TRIGGER_SLOPE
configuration setting in different ways. In order to orthogonalize
the API, change the type of the setting to a string with the same
format as used for logic probes.
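Illustratively, the value passed through the config API changes from a
driver-specific integer to a string:

    GVariant *old_style = g_variant_new_uint64(1);    /* meaning varied per driver */
    GVariant *new_style = g_variant_new_string("r");  /* "r" = rising, "f" = falling */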
Modify the bitstream loading routine to work directly with the
Raw Binary Files (.rbf) generated by Altera tools. Previously,
a custom format was used which was basically an RBF preceded by
a 4-byte header specifying the transfer length.
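A rough sketch of the simpler loading path, using GLib's file helper
(the file name is illustrative, and the actual driver code differs):

    gchar *bitstream;
    gsize length;
    GError *error = NULL;

    /* The transfer length is now simply the size of the .rbf file,
     * instead of a value read from a custom 4-byte header. */
    if (!g_file_get_contents("bitstream.rbf", &bitstream, &length, &error))
            return SR_ERR;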
The *.sr (libsigrok session) file format has changed since the last
libsigrok release. Frontends using older libsigrok versions will not
be able to read *.sr files created by frontends using the new file format.
Thus, bump the version number of the file format to 2.
Current libsigrok will read both version 1 and version 2 files
correctly, and always write version 2 files.
Move pre-acquisition hardware setup to the new config_commit()
callback. At the moment, the only setting applied at commit
time is switching the clock source, which involves uploading
a new bitstream to the FPGA.
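An illustrative shape of the new callback; the helper and the
device-context field names are made up, and the real driver code is
more involved:

    static int config_commit(const struct sr_dev_inst *sdi)
    {
            struct dev_context *devc = sdi->priv;

            /* Switching the clock source means uploading a different
             * bitstream to the FPGA, so do it here rather than at
             * acquisition start. */
            if (devc->cur_clock_source != devc->selected_clock_source)
                    return apply_clock_source(sdi);  /* hypothetical helper */

            return SR_OK;
    }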
Move setup of channels and trigger masks to the new probe
configuration callback. Although the actual hardware setup
still happens just before acquisition, the new approach
already has the advantage that invalid settings are caught
early.
Also, it turns out that the LWLA1034 allows triggering on
channels which are not enabled for data acquisition. This
feature is now supported as well.
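A small sketch of the kind of per-probe bookkeeping this enables
(device-context field names are illustrative):

    uint64_t bit = UINT64_C(1) << probe->index;

    if (probe->enabled)
            devc->channel_mask |= bit;

    /* Triggering may be requested even on probes that are not enabled
     * for data acquisition. */
    if (trigger == 'r')
            devc->trigger_rising |= bit;
    else if (trigger == 'f')
            devc->trigger_falling |= bit;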
Apparently, frontends may call scan() more than once to accumulate
multiple devices, so do not reset the instance list pointer at the
start of each scan. Also, number devices continuously across scans.
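Illustratively, scan() now only appends to the driver's instance list
instead of reinitializing it (variable names as commonly used in
libsigrok drivers):

    /* Do not reset drvc->instances at the start of scan(); just add
     * whatever was found, so earlier results are preserved and device
     * numbering continues across scans. */
    drvc->instances = g_slist_append(drvc->instances, sdi);
    devices = g_slist_append(devices, sdi);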
This change moves the handling of series differences out to the points in the
code where they actually matter, unifying the overall structure of the code.
It also adds new VS5000/DS1000 series equivalents for commands that were
previously only implemented on the later models.
After this change, trigger waiting and the 'Memory' data source are supported
on the VS5000/DS1000 series.