This results in vertical alignment of sample data and trigger positions.
The implementation assumes that the channel names' byte count corresponds
to the space which they occupy on screen. Channel names with umlauts may
still suffer from misalignment.
[ gsi: rephrased commit message ]
For pure analog acquisition without logic data the ZIP creation code
path resulted in a division by zero. Skip the bytes to samples math in
that case. How to reproduce:
$ sigrok-cli -d demo:logic_channels=0:analog_channels=1 --samples 20 -o file.sr
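A minimal sketch of the guard, using hypothetical variable names (the
actual code keeps these values in the output module's context):

/* Only convert bytes to samples when logic data is present at all. */
if (logic_unitsize > 0)
	num_samples = num_bytes / logic_unitsize;
else
	num_samples = 0; /* pure analog capture, no logic samples */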
Avoid a dependency on malloc(0) behaviour while we are here. Add a
warning on data feed submitter implementation issues, to not silently
drop the data, which could surprise users. This ShouldNotHappen(TM) for
correct implementations where channel counts and unit size agree, but
was observed with incomplete out-of-tree implementations. Eliminate
a data type redundancy in another malloc() call.
Quite a few drivers flush the serial port after opening it, and quite a
few don't although they should. Factor this out, so serial_open() will
always flush the port. The removal in the drivers was done with this
small coccinelle script:
@@
struct sr_serial_dev_inst *serial;
@@
serial_open(serial, ...)
... when != serial
- serial_flush(serial);
and then the results and the unmatched findings of serial_flush() were
audited.
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Eliminate the dependency on the non-portable bt_put_le16() routine. It
isn't available in all supported BlueZ versions, and won't be available
in other platform backends. Prefer write_u16le() which is available on
all sigrok supported platforms.
Only return OK from the format match routine when either of the tested
conditions reliably matched. Return an error in all other cases. This
prevents the Saleae module from "winning a contest" on even the
weakest condition, and then being unable to handle the input file.
Start the implementation of an input module which covers Saleae Logic's
export files. CSV and VCD are handled by other modules, this one accepts
binary exports for Logic1 digital data (every sample, and when changed),
Logic1 analog data, Logic2 digital data, and Logic2 analog data.
The newer file format versions contain header information and can get
auto-detected, the older formats require a user spec. Some of the file
formats lack essential information in the file content, thus require
another user spec (samplerate for digital data is an example).
The .logicdata file format is unknown, and is not supported. The .sal
format could get added later, but requires local file I/O in the input
module, which current common infrastructure does not provide.
Extend the common set of endianness conversion helpers. Cover readers and
writers for little endian single and double precision and 64bit integer
values, including support to advance the read/write position.
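A sketch of one such writer with position advance, following the existing
write_u16le() naming convention (the exact helper names in the header may
differ):

#include <stdint.h>
#include <string.h>

static void write_dblle_inc(uint8_t **p, double v)
{
	uint64_t u;
	size_t b;

	memcpy(&u, &v, sizeof(u)); /* IEEE754 bit pattern of the double */
	for (b = 0; b < sizeof(u); b++)
		*(*p)++ = (u >> (8 * b)) & 0xff; /* least significant byte first */
}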
Rephrase the default value for the 'skip' option and the detection of a
user specified value. This is tricky: sample numbers are kept in 64bit
values; skip and downsample are fed to formulae, so we want them both to
be unsigned; yet the absence of a user spec must be told apart from
possible user values of 0 and above. Use all-ones for the default
of "-1" which translates to "first timestamp", users need not be able to
specify that negative value.
Make sure to only downsample 'skip' values when the user specified some.
Which avoids the undesired insertion of huge idle gaps at the start of
the capture. An earlier implementation had to check for -1, this recent
version uses an unsigned number in combination with a boolean flag to
achieve this.
Reword some diagnostics messages, and print the samples count between
timestamps while we are here. Add a check for successful text to number
conversion of timestamp values.
How to reproduce:
$ pulseview -i file.vcd
$ pulseview -i file.vcd -I vcd:downsample=5
$ pulseview -i file.vcd -I vcd:skip=111381600
Example file:
$timescale 1 ns $end
$scope module top $end
$var wire 1 ! d1 $end
$upscope $end
$enddefinitions $end
#111381815
0!
#111381905
1!
#111381990
0!
#111382075
Accumulate samples from multiple session feed packets before sending
them off to ZIP archive operations. This improves throughput for those
setups where acquisition devices or input modules provide only few
samples per session feed send call.
This version also splits large packets from applications into smaller
ZIP members (if the application's packet size is larger than the output
module's local buffer size). If that is not desired, the implementation
needs adjustment to immediately pass larger blocks to ZIP operations
(after potentially flushing previously queued data) instead of looping.
This fixes bug #974.
Extend and rephrase the VCD output module, to support mixed signal data,
support higher channel counts, and address other minor issues.
Increase the number of VCD identifiers which can get generated. Bump the
limit from 94 to 18346 channels. Prefer single letter names for backwards
compatibility for the first channels. Use two or three letter identifiers
as needed for higher channel counts.
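A generic sketch of shortest-first identifier generation over the 94
printable ASCII characters; the module's exact alphabet, ordering, and
the resulting limit of 18346 identifiers may differ:

#include <stddef.h>
#include <string.h>

static void vcd_make_id(char *buf, size_t buflen, size_t idx)
{
	const size_t base = 94; /* printable ASCII '!' (33) .. '~' (126) */
	char tmp[16];
	size_t len, span, i;

	/* Determine the identifier's length: 94 one-char IDs come first,
	 * then 94^2 two-char IDs, and so on. */
	len = 1;
	span = base;
	while (idx >= span) {
		idx -= span;
		span *= base;
		len++;
	}
	/* Encode the remaining index as base-94 digits, last digit first. */
	for (i = len; i--; ) {
		tmp[i] = (char)('!' + idx % base);
		idx /= base;
	}
	if (len + 1 > buflen)
		return; /* caller's buffer too small */
	memcpy(buf, tmp, len);
	buf[len] = '\0';
}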
Add support for analog channels, and carefully organize a queue such
that timestamps and their data only get written after input data for
_all_ channels was received from the session feed. Provide IEEE754
double precision values for maximum compatibility with other VCD aware
software, although sigrok internally passes analog data with single
precision. This makes potential later adjustment transparent to external
software.
Factor out and rephrase code while we are here. This implementation
avoids glib calls where they'd hurt performance. A local pool reduces
malloc() pressure to increase throughput. String manipulation is tuned
for simplicity and reduced cost. Special code paths were added to tune
the use cases where mixed signals are not involved (immediate write to
the output text, bypassing the output module's local queue).
An srzip input implementation detail still makes the VCD output consume
lots of memory during merge sort of channels' data. See bug #1566.
Other nits got addressed in bypassing: Adjust data types. Separate the
gathering of detail information and the construction of the VCD header
text to simplify review and future maintenance. Skip VCD identifiers for
disabled channels. Emit a final timestamp to flush the last sample, and
communicate the total capture length.
Update comments. Update the copyright for recent non-trivial changes.
Extend and rework the VCD input module: accept more data types, improve
usability, fix known issues.
Add support for bit vectors (arbitrary width), multi-bit integer values
(absolute 64bit width limit, internal limitation to single precision),
and floating point numbers ('real' in VCD, single precision in sigrok).
Unfortunately sigrok has no concept of multi-bit logic channels or of
IEEE-1364 stdlogic values, so the input module maps input data to
strict booleans across multiple logic channels. A vector's channels are
named and grouped to reflect their relation. VCD 'integer' types are
mapped to sigrok analog channels. Add support for scoped signal names,
and the re-use of one VCD signal name for multiple variables.
Rework file and text handling. Only skip pointless UTF-8 BOMs before
file content (not between sections). Handle lack of line termination at
the end of the input file. Process individual lines of input chunks,
avoid glib calls when they'd result in malloc pressure, and severely
degrade performance. Avoid expensive string operations in hot loops.
Rearrange the order of parse steps, to simplify maintenance and review:
end of section, new section, timestamp, data values, unsupported. Flush
previously queued values in the absence of a final timestamp. Unbreak
$comment sections in the data part. Apply stricter checks to input data,
and propagate errors. Avoid silent operation (weak warnings can go
unnoticed) which yields results that are unexpected to users. Unbreak
the combination of 'downsample' with 'skip' and 'compress'. Reduce noise
when users limit the number of channels while the input file contains
more data (keep a list of consciously ignored channels). Do warn or
error out for serious and unexpected conditions.
Address minor issues. Use common support for datafeed submission. Keep
user specified options across file re-load. Fixup data type nits, move
complex code blocks into separate routines. Sort the groups of routines,
put helpers first and concentrate public routines at the bottom. Extend
the builtin help text. Update comments, update the copyright for the
non-trivial changes.
Fixes bug #776 by adding support for bit vectors.
Fixes bug #1476 by flushing most recently received sample data.
Input modules often find themselves in the situation where sample data
was received and could be sent to the session bus, but submission should
get deferred to reduce the number of send calls and provide larger data
chunks in these calls. Introduce common support code for buffered sample
data submission (both logic and analog), provide a simple alloc, submit,
flush, and free API.
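A hypothetical usage sketch from a driver's or input module's receive
path. The helper names merely follow the alloc/submit/flush/free
description above; see libsigrok-internal.h for the actual signatures:

struct feed_queue_logic *q;
size_t i;

/* sdi, unit_size, data, sample_count: as available at the call site.
 * CHUNK_SAMPLES: an arbitrary queue depth chosen by the caller. */
q = feed_queue_logic_alloc(sdi, CHUNK_SAMPLES, unit_size);
for (i = 0; i < sample_count; i++)
	feed_queue_logic_submit(q, &data[i * unit_size], 1);
feed_queue_logic_flush(q); /* force out whatever is still queued */
feed_queue_logic_free(q);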
The input/binary module chops raw input data into chunks and sends these
to the session feed. The total size of input chunks got aligned to the
unit size, the session feed output didn't. Make sure to align session
packets with the input data's unit size, too.
This fixes bug #1582.
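A minimal sketch of the alignment, assuming the accumulated input byte
count and the device's unit size are at hand (names are placeholders):

size_t send_len, remain;

send_len = (buf_len / unit_size) * unit_size; /* round down to unit size */
remain = buf_len - send_len;
/* Submit send_len bytes to the session feed now, and carry the
 * remaining bytes over into the next processed chunk. */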
When using a number of frames that is not 1, the driver will read
samples up to its limit and then wait for another trigger. This will be
repeated until the configured number of frames has been acquired.
This seems to make the Rigol DS1054Z work. It's still a bit janky --
on a live capture, sample 688 (zero-based) out of the 1200-sample
frame seems to consistently contain garbage. I'm not sure what's
going on.
The Rigol DS1054Z sometimes returns zero bytes in response to a bulk in
request. sigrok ends up reading out of bounds and failing ungracefully
when this happens. Check that libusb returned a full USBTMC header and
fail gracefully if it did not.
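A sketch of the added check; the 12 byte minimum corresponds to the size
of a USBTMC bulk-in header, variable names are placeholders:

unsigned char buf[1024];
int ret, xfer = 0;

/* Requires <libusb.h>; SR_ERR is libsigrok's generic error code. */
ret = libusb_bulk_transfer(usb_devhdl, bulk_in_ep, buf, sizeof(buf), &xfer, 1000);
if (ret != LIBUSB_SUCCESS || xfer < 12)
	return SR_ERR; /* short or empty response, don't parse the header */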
According to the programming manual, one should issue
:WAV:RES
:WAV:BEG
before reading data from internal memory. Without this, the wrong data
will be returned.
I want to fix this double-close issue I see with my OLS:
First close at the end of a 'scan':
sr: [00:00.045171] openbench-logic-sniffer: Got metadata key 0x00, metadata ends.
sr: [00:00.045178] openbench-logic-sniffer: Disabling demux mode.
sr: [00:00.045186] serial: Closing serial port /dev/ttyACM0.
Second one as part of hwdriver cleanup:
sr: [00:00.046088] hwdriver: Cleaning up all drivers.
sr: [00:00.046108] serial: Closing serial port /dev/ttyACM0.
sr: [00:00.046116] serial-libsp: Cannot close unopened serial port /dev/ttyACM0.
So, before closing a second time, check if the device is not idle.
I am optimistic this could fix bugs #1151 and #1275, too.
Signed-off-by: Wolfram Sang <wsa@kernel.org>
- always say 'ID' when the ID command failed
- print hexdump of a faulty ID because on a stalled device we may get
0x00 bytes which would terminate the string early.
Signed-off-by: Wolfram Sang <wsa@kernel.org>
This changeset adds support for the RDTech TC66C USB power meter.
Currently, the driver reports the following channels:
* V: VBus voltage
* I: VBus current
* D+: D+ voltage
* D-: D- voltage
* E: Energy consumed in threshold-based recording mode.
The number of significant digits shown for each channel has been set
to match the number of digits shown on the device.
Usage example:
sigrok-cli -d rdtech-tc:conn=/dev/ttyACM0 --scan
Known issues:
* BLE support is currently unimplemented. This uses a different
command set, but the same poll data format.
Kudos to Ben V. Brown for reverse engineering some of the protocol and
documenting the encryption key used for poll data.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
Being able to calculate a CRC16 is useful in multiple places, so factor
this out into a new module with a CRC implementation. This module
currently only supports the ANSI/Modbus/USB flavor of CRC16.
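A minimal sketch of the ANSI/Modbus/USB CRC16 flavor (reflected
polynomial 0xA001, initial value 0xFFFF); the library routine's actual
name and signature may differ:

#include <stdint.h>
#include <stddef.h>

static uint16_t crc16_modbus(const uint8_t *data, size_t len)
{
	uint16_t crc = 0xffff;
	size_t i;

	while (len--) {
		crc ^= *data++;
		for (i = 0; i < 8; i++) {
			if (crc & 1)
				crc = (crc >> 1) ^ 0xa001;
			else
				crc >>= 1;
		}
	}

	return crc;
}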
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
This changeset adds support for the RDTech UMxx series of USB power
meters. The driver has been tested with the RDTech UM24C, but should
support the UM24C, UM25C, and the UM34C.
Currently, the driver reports the following channels:
* V: VBus voltage
* I: VBus current
* D+: D+ voltage
* D-: D- voltage
* T: Device temperature
* E: Energy consumed in threshold-based recording mode.
The number of significant digits shown for each channel has been set
to match the number of digits shown on a UM24C.
Missing features:
* There is currently no support for configuring threshold-based
recording from sigrok, but this can be done on the device itself.
* Fast charging mode currently not logged.
Usage example:
sigrok-cli -d rdtech-um:conn=bt/rfcomm/MAC --scan
sigrok-cli -d rdtech-um:conn=/dev/rfcomm0 --scan
Known issues:
* When using sigrok's Bluetooth transport implementation, the device
is disconnected between probing and sampling. Some devices (e.g.,
the UM24C) dislike this and can't be reconnected reliably for
sampling. This is not an issue when setting up an rfcomm device
manually and using it as a serial port.
Kudos to Sven Slootweg for documenting most of the protocol.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
Many devices receive a struct with binary values when polled. Many of
these values will correspond to channels in sigrok. This
change introduces helper functions for automatically reading and
scaling such values and sending them down a sigrok analog channel.
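A hypothetical sketch of such a helper; the struct layout, the fixed
unsigned 16bit little endian raw format, and all names are illustrative
only:

#include <stdint.h>
#include <stddef.h>

struct bin_field {
	size_t offset; /* byte offset within the poll response */
	float scale;   /* physical value per raw LSB */
	int digits;    /* significant digits for the analog packet */
};

static float bin_field_value(const uint8_t *buf, const struct bin_field *f)
{
	uint16_t raw;

	raw = (uint16_t)buf[f->offset] | ((uint16_t)buf[f->offset + 1] << 8);
	return raw * f->scale;
}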
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
The meter allows remote controlled start of recordings, but requires a
few parameters, and it's uncertain how to most appropriately get these
by means of SR_CONF_* keys.
Introduce SR_CONF_SET support for SR_CONF_DATALOG to raise awareness,
but leave the implementation empty for now. Leave a TODO comment which
discusses the meter's commands that one might want to use from here.
Extend the previously introduced skeleton driver for UNI-T UT181A. Introduce
support for the multimeter's full protocol as it was documented by the ut181a
project. This covers the retrieval of live readings, saved measurements, and
recordings, in all of the meter's modes and including relative, min/max, and
peak submodes. This implementation also parses compare mode (limits check)
responses, although it cannot express the result in terms of the session feed.
Announce the device as a multimeter as well as a thermometer; it supports
up to two probes including difference mode. When in doubt, prefer usability
over feature coverage (the driver side reflects all properties of the meter,
but not all features can get controlled by the driver). The probe routine
requires that users specify the serial port, and enable serial communication
on the meter.
Several TODO items remain. Comments in the driver code discuss limitations
of the current implementation, as well as cases where the meter's features
don't map well to sigrok's internal presentation. This implementation also
contains (optional, off by default) diagnostics for research on the serial
protocol.
When data patterns for trigger specs span multiple bits, users may not
want to specify long lists of "<ch>=<lvl>" conditions for sigrok-cli's
--trigger option, and count channels by hand. Or click a dozen dialogs
to specify one data pattern, or modify a previous specification. Setups
with few traces may accept that, "data heavy" setups like parallel data
or address bus inspection may not.
Add comments which discuss the potential use of SR_CONF_TRIGGER_PATTERN.
Outline a syntax which may be flexible enough _and_ acceptable to users,
support data patterns and edge triggers alike, in several presentations
that serve different use cases. This commit exclusively adds comments,
does not change behaviour.
Update a comment in the parser which converts user trigger specs to the
internal format, to expand on hardware constraints and implementation
limitations. Rename
an identifier which checks the number of edge conditions, not the number
of accepted trigger spec details.
Trigger support became operational again. Drop the compile time switch
which disabled the previously incomplete implementation.
This resolves bug #359.
Parse trigger specs early when acquisition starts, timeout calculation
needs to reflect on it. Either immediately start an acquisition timeout
for trigger-less configurations. Or prepare a timeout which spans the
post-trigger period, but only start its active period when the trigger
match was detected by the device's hardware.
Extend mode tracking during acquisition to handle other special cases.
Terminate acquisition when the user specified sample count limit exceeds
the hardware capacity, or when no limits were specified and the device's
memory is exhausted.
There is a slight inaccuracy in this approach, but the implementation
errs on the safe side. When both user specified limits and triggers are
involved, then at least the user specified time or sample count span is
provided. Usually more data is sent to the session feed, and all of the
requested period is covered. This is because of the software poll period,
the potential to start the timeout slightly late, and the slack which was
added for hardware pipelines in the timeout calculation.
The previous implementation ran the complete sample memory retrieval
in a single call to the receive callback. Which in combination with
slow USB communication and deep memory could block application logic
for rather long periods of time.
Rephrase the download_capture() routine such that it can spread its
workload across multiple invocations. Run the acquisition stop and
resource allocation for the download, the interpretation of a set of
DRAM lines, and the resource cleanup, as needed. And keep calling the
download routine until completion of the interpretation of the sample
memory region of interest. The workload size per invocation may need
more adjustment.
The previous implementation could stall UI progress for some 20-30s.
This change lets users perceive UI progress while sample memory gets
retrieved and interpreted.
This resolves bug #1005.
Recent commits added "position tracking" for interesting spots in the
sample stream and the current iteration pointer. Which obsoletes the
counters for remaining items until trigger, the "triggered here" flags,
as well as the unfortunate "rewind a little" workaround which lacked a
comment on its motivation or implementation details.
The hardware provided trigger match location is inaccurate. Do check
sample values against the initial trigger condition spec for a short
range of the retrieved sample data, to refine the trigger marker's
position which is sent to the session feed.
Temporarily ignore the optional sample count limit for trigger-using
acquisitions, to reduce the diff size and simplify review. Since the
hardware transparently compresses sample data, we cannot reliably
determine where to start the download and interpretation of sample data,
and the submission to the session feed. Starting early in the sample
memory content, and sticking with the strict sample count limit, could
clip submission before the actual trigger position.
This implementation provides _at least_ the requested amount of data,
and does cover the spot of interest (the trigger position). This, and
the fact that trigger support has become operational again, is considered an
important improvement. The inaccuracy is considered acceptable for now.
Trigger-less acquisition does enforce the exact sample count limit.
Rephrase how the sample memory iteration position gets tracked, increment
after every event slot already. Update the "last seen sample" status more
often (an event slot can hold several sample items). Arrange for a period
of time where software will check sample data for trigger matches. This
improves the precision of the hardware provided trigger match location.
Do send hardware provided trigger locations to the session feed even if
the software check found no match on the data content. This covers user
initiated button presses (which can unblock the acquisition when the
application provided trigger condition never matches).
Note that this implementation does manage the window of supervision, but
does not yet check the sample values against the trigger condition. This
gets added later.
Factor USB data transfer out of the code path which interprets sample
memory content. Keep internal state of sample memory download in the
device context. This eliminates local variables, and ideally allows a
future implementation to spread chunked downloads across several read
callbacks, which would improve UI responsiveness.
Update comments while we are here. Address minor portability nits (ull
suffix vs UINT64_C macro). The inner loops (iterating clusters and their
events which contain individual samples) are not affected by this commit.
Further rephrase the sigma_write_trigger_lut() routine. It's helpful to
"think" in BE16 quantities to improve readability of LUT address and
parameter downloads. Better matches the vendor's documentation. Also use
a better name for the "trigger select 2" register content.
Start moving parameters into the device context which are related to the
interpretation of sample memory content. This can simplify error paths,
allow to release resources late. And ideally sample memory download and
interpretation could spread across several receive callbacks, improving
UI responsiveness. Also makes total and current dimensions available to
deeper nesting levels in the interpretation, which currently don't have
access to these details.
Create a sub struct in the device context which keeps those parameters
which are related to sample memory interpretation. Which also obsoletes
the 'state' struct and only leaves the 'state' enum as a remainder.
Use the "samples per event" condition instead of the samplerate when
extracting a number of samples from an event's storage. Rename the
de-interleaving routines to better reflect their purpose.
Mechanically adjust the add_trigger_function() routine to address nits,
attempt to improve maintainability.
Raise awareness of the fact that strict binary arithmetic is done (bit
operators are used), the strict 0..1 set of values needs to be enforced,
and mere "logical truthiness" is not good enough in this spot. Explicitly
check for bit positions instead of "shifting out" the bit of interest
and having the 0/1 value result nearly by coincidence.
Extend comments. Group related instructions and separate them from other
groups. Reduce the scope of the rather generic i, j, tmp named variables
which are just too easy to get wrong.
Rename macros to better reflect which of them check a bit position, and
which span a bit field of given width. Adjust more call sites to use the
macros. This takes tedium out of maintenance as well as review. Has the
minor benefit of somewhat shortening text lines, and eliminating nested
parentheses (or at least the perception of them).
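A sketch of the distinction, with hypothetical macro and variable names
(the driver's actual identifiers differ):

#define BIT(pos)		(1UL << (pos))
#define BITS_MASK(count)	((1UL << (count)) - 1)

/* Check a single bit position vs extract a bit field of a given width. */
is_ext_clock = (reg & BIT(3)) ? 1 : 0;
clock_divider = (reg >> 4) & BITS_MASK(2);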
Move the acquisition limits related variables into a sub struct within
the device context. Over time they became numerous, and might grow more
in the future.
There are several separate conditions which the driver needs to tell
apart. There is a compile time switch whether trigger support shall be
built in. There is the condition whether acquisition start involved a
user provided trigger spec. And there is the hardware flag whether a
previously configured trigger condition matched and where its position
is.
Only accept user provided trigger specs when trigger support is builtin.
(The get/set/list availability and spec passing is done in applications
outside of the library, we better check just to make sure.) Only setup
the trigger related hardware parameters when a spec was provided. Only
check for trigger positions when the hardware detected a match.
Enable the compile time option which builds trigger support code into
the asix-sigma driver. This is a development hack. Trigger support in
the driver is incomplete and currently not operational.
Address remaining data type nits. Use more appropriate types for sizes
and counters and indices, as well as for booleans.
Prefer more verbose variable names in a few spots to avoid the rather
generic 'i' symbol, especially in complex code paths with deeply nested
flow control or with long distances between declaration and use. Re-use
an existing buffer in the acquisition start for command sequences which
setup trigger in/out as well as clock parameters.
Introduce some variables which may seem unnecessary. But these are
useful for research during maintenance.
This is a mechanical adjustment, behaviour does not change.
Introduce the required infrastructure to store successfully applied
configuration data in hardware registers. This lets the probe phase of
the next sigrok session pick up where the previous session left off. Which
improves usability, and increases performance by eliminating delays in
the acquisition start, by not repeating unnecessary firmware uploads.
The vendor documentation suggests there would be FPGA registers that are
available for application use ("plugin configuration"). Unfortunately
experiments show that registers beyond address 0x0f don't hold the data
which was written to them. Neither do unused registers in the first page. So
the desirable feature is not operational in this implementation. There
could be different netlist versions which I'm not aware of, or there
could be flaws in this driver implementation. This needs more attention.
Stop assuming that C language variables would have a specific memory
layout that applications could rely on. Use normal data types in higher
abstraction layers, drop non-portable bit fields. Use existing macros
for the creation of bit masks of a given width.
When application code used the common SW limits API, the call sequence
of init() then set() then check() would already report expiration, which
is rather unexpected. The timeout period should only start when start()
is called; check() should not signal expiration before the start() call.
The specific use case is the combination of an msecs timeout and capture
ratio when triggers are used. The post-trigger period only starts when
the trigger match was seen, even though its length is already known when
the acquisition starts. It's desirable to run the start() call for the
post-trigger timeout late, and not terminate the acquisition before the
trigger match.
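A hedged usage sketch of the intended call sequence, assuming the common
sr_sw_limits_*() helpers from libsigrok-internal.h (check the header for
the exact signatures):

/* At device instance creation / config time: */
sr_sw_limits_init(&devc->limits);
/* ... sr_sw_limits_config_set() calls may arrive in any order ... */

/* Once the hardware reports the trigger match, start the
 * post-trigger period only then: */
sr_sw_limits_acquisition_start(&devc->limits);

/* In the receive callback: */
if (sr_sw_limits_check(&devc->limits))
	sr_dev_acquisition_stop(sdi);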
The previous implementation considered both the UART bitrate and the
frame format mandatory; users had to specify "/8n1" as well just to
change the bitrate.
This commit makes the frame format optional, and defaults to 8n1 which
is so popular these days. When specified, the frame format still needs
to precede the other optional flow and handshake flags, for maximum
backwards compatibility, and equal robustness in the process of parsing
serialcomm= specs.
Unfortunately device drivers cannot specify their preferred or default
UART frame format, which means that users still need to provide it
when a device does not use 8n1. This is not a regression; the previous
implementation always needed the frame format spec.
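For illustration (hypothetical bitrate values):

  serialcomm=115200        frame format omitted, 8n1 is assumed
  serialcomm=115200/7e1    explicit frame format, placed before any other
                           optional parts of the spec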
Eliminate doubt from a comment on ASIX SIGMA's channel names which was
introduced in commit d261dbbfcc. Publicly available documentation does
agree their names start at "1" and go up to "16".
The 50MHz netlist supports the use of an external clock. Any of the 16
channels can use any of its edges to have another sample taken from all
the other pins. It's nice that the hardware does track timestamps, which
results in an exact reproduction of the input signals' timing with 20ns
resolution, although the clock is externally provided and need not have
a fixed rate.
Rephrase more parts of sigma_build_basic_trigger() to closer match the
vendor documentation. Use the M3Q name. Be explicit about "parameters"
setup (even if that means to assign zero values, comments help there).
Using three BE16 items for the parameters improves readability.
Rephrase the sigma_build_basic_trigger() and build_lut_entry() routines
to hopefully improve readability. Avoid the use of short and generic
identifiers which are just too easy to confuse with each other and the
1 literal and negation operator in deeply nested loops and complex
expressions that span several text lines. Reduce indentation where
appropriate. Concentrate initialization and use of variables such that
reviewers need less context for verification.
This is a purely mechanical change, the function of triggers remains
untested for now. Setting "selres" in that spot is suspicious, too.
Rephrase the sigma_write_trigger_lut() routine to work on "a higher
level" of abstraction. Avoid short and most of all generic variable
names. Use identifiers that are closer to the vendor documentation.
Reduce the probability of errors during maintenance, and also increase
readability. Replace open coded nibble extraction and bit positions by
accessor helpers and symbolic identifiers. Adjust existing math where it
did not match the vendor documentation. Always communicate 8bit register
addresses, don't assume that application use remains within a specific
"page". Provide more FPGA register access primitives so that call sites
need not re-invent FPGA command sequence construction. Remove remaining
open coded endianness conversion in DRAM access.
The 100/200MHz supporting FPGA netlists differ in their register set
from 50MHz netlists. Adjust the parameter download at acquisition start.
Raise awareness of the "TriggerSelect" and "TriggerSelect2" difference
(the former only exists in 50MHz netlists, the latter's meaning differs
between firmware variants). "ClockSelect" semantics also differs between
netlists. Stop sending four bytes to a register that is just one byte
deep, channel selection happened to work by mere coincidence.
Eliminate a few more magic numbers, unobfuscate respective code paths.
Though some questions remain (trigger related, not a blocker for now,
needs to get addressed later).
Make the list of supported samplerates an internal detail of the
protocol.c source file. Have the api.c source file retrieve the list
as well as the currently configured value by means of query routines.
Ideally the current rate could get retrieved from hardware at runtime.
A future driver implementation could do that. This version sticks with
the lowest supported rate, as in the previous version.
Sort "semi public" routines and "global data" of the asix-sigma driver
in the protocol.h header file by their use. Add comments. This improves
maintenance of the driver source.
Move all of the FTDI connection handling from api.c to protocol.c, and
prepare "forced" and "optional" open/close. This allows future driver
code to gracefully handle situations where FPGA registers need to get
accessed, while the caller may be inside or outside the "opened" period
of the session. This is motivated by automatic netlist type and sample
rate detection, to avoid the cost of repeated firmware uploads.
Detect more error conditions, and unbreak those code paths where wrong
data was forwarded. It's essential to tell the USB communication layer,
sigrok API error codes, and glib mainloop receive callbacks apart. Since
the compiler won't notice, maintainers have to be extra careful.
Rephrase diagnostics messages. The debug and spew levels are intended
for developers, but the error/warn/info levels will get presented to
users, should read more fluently and speak from the application's POV.
Allow long text lines in source code, to not break string literals which
users will report and developers need to search for (this matches Linux
kernel coding style).
This commit also combines the retrieval of sample memory fill level,
trigger position, and status flags. Since these values span an adjacent
set of FPGA registers. Which reduces USB communication overhead, and
simplifies error handling. The helper routine considers the retrieval
of each of these values as optional from the caller's perspective, to
simplify other use cases (mode check during acquisition, before sample
download after acquisition has stopped).
INIT pin sensing after PROG pin pulsing was reworked, to handle the
technicalities of the FTDI chip and its USB communication and the FTDI
library which is an external dependency of this device driver. Captures
of USB traffic suggest that pin state is communicated at arbitrary times.
Address minor style nits to improve readability and simplify review. The
sizeof() expressions need not duplicate data type details. Concentrate
the assignment to, update of, and evaluation of variables in closer
proximity to reduce potential for errors during maintenance. Separate
the gathering of input data and the check for their availability from
each other, to simplify expressions and better reflect the logic's flow.
Further "flatten" the DRAM layout's declaration for sample data. Declare
timestamps and sample data as uint16_t, keep accessing them via endianness
aware conversion routines. Accessing a larger integer in smaller quantities
is perfectly fine; the inverse direction would be problematic.
Keep application data in its logical presentation in C language struct
fields. Explicitly convert to raw byte streams by means of endianness
aware conversion helpers. Don't assume a specific memory layout for
C language variables any longer. This improves portability, and
reliability of hardware access across compiler versions and build
configurations.
This change also unobfuscates the "disabled channels" arithmetic in
the sample rate dependent logic. Passes read-only pointers to write
routines. Improves buffer size checks. Reduces local buffer size for
DRAM reads. Rewords comments on "decrement then subtract 64" during
trigger/stop position gathering. Unobfuscates access to sample data
after download (timestamps and values). Covers a few more occurrences
of magic numbers for memory organization.
Prefer masks over shift counts for hardware register bit fields, to
improve consistency of the declaration block and code instructions.
Improve maintainability of the LA mode initiation after FPGA netlist
configuration (better match written data and read-back expectation,
eliminate magic literals that are hidden in nibbles).
Move the 'devc' parameter to the front in routine signatures for the
remaining locations which were not adjusted yet. Reduce indentation of
continuation lines, especially in long routine signatures. Try to not
break string literals in diagnostics messages, rephrase some of the
messages. Massage complex formulae for the same reason.
Whitespace changes a lot, word positions move on text lines. See a
corresponding whitespace ignoring and/or word diff for the essence of
the change.
The driver got extended in a previous commit to accept any hardware
supported samplerate in the setter API, although the list call does
suggest a discrete set of rates (a subset of the hardware capabilities).
Update a comment to catch up with the implementation.
Drop the 250kHz item, it's too close to 200kHz. Add a 2MHz item to
achieve a more consistent 1/2/5 sequence in each decade. Unfortunately
50MHz and an integer divider will never result in 20MHz, that's why
25MHz is an exception to this rule (has been before, just "stands out
more perceivably" in this adjusted sequence).
Running several firmware uploads in quick repetition sometimes failed.
It's essential to stop the active netlist from interfering with the FPGA's
reconfiguration (FTDI to FPGA pins are few, and shared). Delays within
a single iteration of the initiation sequence improve reliability.
Retries of the sequence are belt and suspenders on top of that.
Before the change, failure to configure was roughly one in ten. After
the change, several thousand reconfigurations passed without failure.
Use symbolic identifiers to select firmware images, which eliminates
magic 0/1/2 position numbers in the list of files, improves readability
and also improves robustness. Move 'devc' to 'ctx' and before other
arguments in routine signatures while we are here.
FPGA configuration (netlist upload) of ASIX SIGMA devices is a rather
special phase, and deserves its own state in the device context's
"state" tracking. Not only is the logic analyzer not available during
this period, the FTDI cable is also put into bitbanging mode instead
of regular data communication in FIFO mode, and netlist configuration
takes a considerable amount of time (tenths of a second).
Use common support for SW limits, and untangle the formerly convoluted
logic for sample count or time limits. Accept user provided samplerate
values when the hardware supports them, also those which are not listed.
The previous implementation mapped sample count limits to timeout specs
which depend on the samplerate. The order of applications' calls into
the config set routines is unspecified, the use of one common storage
space led to an arbitrary resulting value for the msecs limit, and loss
of user specified values for read-back.
Separate the input which was specified by applications, from limits
which were derived from this input and determine the acquisition phase's
duration, from sample count limits which apply to sample data download
and session feed submission after the acquisition finished. This allows
to configure the values in any order, to read back previously configured
values, and to run arbitrary numbers of acquisition and download cycles
without losing input specs.
This commit also concentrates all the limits related computation in a
single location at the start of the acquisition. Moves the submission
buffer's count limit container to the device context where the other
limits are kept as well. Renames the samplerate variable, and drops an
aggressive check for supported rates (now uses hardware constraints as
the only condition). Removes an unused variable in the device context.
Introduce a 4MiB session feed submission buffer in the device context.
This reduces the number of API calls and improves performance of srzip
archive creation.
This change also eliminates complex logic which manipulates a previously
created buffer's length and data position, to split the queued data when
a trigger position was involved. The changed implementation results in a
data flow from sample memory to the session feed which feels more natural
during review, and better lends itself to future trigger support code.
Use common SW limits support for the optional sample count limit. Move
'sdi' and 'devc' parameters to the front to match conventions. Reduce
indentation in routine signatures while we are here.
This implementation is prepared to handle trigger positions, but for now
disables the specific logic which checks for trigger condition matches
to improve the trigger marker's resolution. This will get re-enabled in
a later commit.
Add more symbolic identifiers, and rename some of the existing names for
access to SIGMA sample memory. This eliminates magic numbers and reduces
redundancy and potential for errors during maintenance.
This commit also concentrates DRAM layout related declarations in the
header file in a single location, which previously were scattered, and
separated registers from their respective bit fields.
Extend comments on the difference of events versus sample data.
Move the FPGA commands (which can access registers, and sample memory)
declarations before the register layout declaration. Which then no
longer separates the registers declarations from their bit fields.
Update comments on the register set while we are here.
Eliminate a few magic numbers in FPGA commands, use symbolic identifiers
for automatic register address increments, and DRAM access bank selects.
Improve grouping of related declarations in the header file.
Slightly rephrase and comment on the FPGA configuration of the ASIX
SIGMA logic analyzer. Use symbolic pin names to eliminate magic numbers.
Concentrate FPGA related comments in a single spot, tell the Xilinx FPGA
from the FTDI cable (which uses bitbang mode for slave serial configuration).
This fixes typos in the PROG pulse and INIT check (tests D5 and comments
on D6). Also removes the most probably undesired 100s timeout in the
worst case (100M us, 10K iterations times 10ms delay). Obsoletes labels
for error paths. Drops a few empty lines to keep related instruction
blocks together. Includes other style nits.
Stick with the FTDI library for data acquisition, and most of all for
firmware upload (bitbang is needed during FPGA configuration). Removing
this dependency is more complex, and needs to get addressed later.
Re-use common USB support during scan before open, which also allows to
select devices if several of them are connected. Both the "conn=vid.pid"
and "conn=bus.addr" formats are supported and were tested.
This implementation detects and displays SIGMA and SIGMA2 devices. Though
their function is identical, users may want to see the respective device
name. Optionally detect OMEGA devices, too (compile time option, off by
default), though they currently are not supported beyond detection. They
just show up during scans for ASIX logic analyzers, and users may want to
have them listed, too, for awareness.
This implementation also improves robustness when devices get disconnected
between scan and use. The open and close routines now always create the
FTDI contexts after the code has moved out of the scan phase, where common
USB support code is used.
This resolves bug #841.
Eliminate an unnecessary magic number for the maximum filename length of
SIGMA netlists. Use a more compact source code phrase to "unclutter" the
list of filenames and their features/purpose. Move the filesize limit to
the list of files to simplify future maintenance.
Introduce a text to number conversion routine which is more general than
sr_atol() is. It accepts non-decimal numbers, with optional caller given
or automatic base, including 0b for binary. It is not as strict and can
return the position after the number, so that callers can optionally
support suffix notations (units, or scale factors, or multiple separated
numbers in the same text string).
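A hypothetical usage sketch; the routine's actual name and parameter
order may differ (sr_text_to_num() is a placeholder):

const char *text = "0b1010ms";
uint64_t value;
char *endptr;

/* Placeholder name; base 0 requests automatic base detection. */
if (sr_text_to_num(text, &value, &endptr, 0) == SR_OK) {
	/* value == 10, endptr points at "ms", so the caller can
	 * optionally parse a unit or scale factor suffix. */
}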
Address style, robustness, and usability nits in the common endianness
conversion helpers in the libsigrok-internal.h header file. Rephrase
preprocessor macros as static inline C language functions to eliminate
side effects, and improve data type safety. Provide macros under the
previous names for backwards compatibility, so that call sites can
migrate to the routines at their discretion (or not at all).
Performance is not affected. Inline routines are identically accessible
to compiler optimizers as preprocessor macros with their text expansion
are. Resulting machine code should be the same.
Introduce variants which also increment the read or write position in
the byte stream after data transfer. This reduces more redundancy at
call sites.
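In the spirit of this change, a reader pair expressed as static inline
routines (the actual helpers cover more widths, signedness variants, and
byte orders):

#include <stdint.h>

static inline uint16_t read_u16le(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}

static inline uint16_t read_u16le_inc(const uint8_t **p)
{
	uint16_t v;

	v = read_u16le(*p);
	*p += sizeof(v); /* advance the caller's read position */
	return v;
}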
The UT32x driver requires a user spec for the connection. The device
cannot get identified, that's why successful open/close for the port
will suffice. Lack of an input spec as well as failure in the early
scan phase will terminate the scan routine early.
When we reach the end of the scan which creates the device instance
and registers it with the list of found devices, the port already
is closed and the list of devices will never be empty. Remove the
redundant close call and the dead branch which frees the serial port.
src/hardware/siglent-sds/protocol.c: In function 'siglent_sds_get_digital':
src/hardware/siglent-sds/protocol.c:382:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (data_low_channels->len <= samples_index) {
^
src/hardware/siglent-sds/protocol.c:391:36: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (data_high_channels->len <= samples_index) {
^
src/hardware/siglent-sds/protocol.c:417:32: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (long index = 0; index < tmp_samplebuf->len; index++) {
^
In file included from src/hardware/siglent-sds/protocol.c:37:0:
src/hardware/siglent-sds/protocol.c: In function 'siglent_sds_receive':
src/hardware/siglent-sds/protocol.h:28:20: warning: format '%li' expects argument of type 'long int', but argument 3 has type 'uint64_t {aka long long unsigned int}' [-Wformat=]
#define LOG_PREFIX "siglent-sds"
^
./src/libsigrok-internal.h:815:41: note: in expansion of macro 'LOG_PREFIX'
#define sr_dbg(...) sr_log(SR_LOG_DBG, LOG_PREFIX ": " __VA_ARGS__)
^
src/hardware/siglent-sds/protocol.c:564:6: note: in expansion of macro 'sr_dbg'
sr_dbg("Requesting: %li bytes.", devc->num_samples - devc->num_block_bytes);
^
src/hardware/siglent-sds/protocol.c: In function 'siglent_sds_get_dev_cfg_horizontal':
src/hardware/siglent-sds/protocol.h:28:20: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'uint64_t {aka long long unsigned int}' [-Wformat=]
#define LOG_PREFIX "siglent-sds"
^
./src/libsigrok-internal.h:815:41: note: in expansion of macro 'LOG_PREFIX'
#define sr_dbg(...) sr_log(SR_LOG_DBG, LOG_PREFIX ": " __VA_ARGS__)
^
src/hardware/siglent-sds/protocol.c:933:2: note: in expansion of macro 'sr_dbg'
sr_dbg("Current memory depth: %lu.", devc->memory_depth_analog);
^
There are cases where the connect() call returns EBUSY when trying to
connect to a device. This has been observed when sampling an RDTech
UM24C. In this case, scanning the device works fine. However, when
sampling the device, Sigrok first scans the device, then closes the
connection and re-opens it to sample the device. If the close/open
calls happen in close succession, the Bluetooth stack sometimes
returns EBUSY.
Work around this issue by retrying if the connect() returns EBUSY.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
Since the device should be closed after the scan, close it in sr_modbus_scan.
Alternatively, every single driver could close the device after calling
sr_modbus_scan. That would duplicate code, is easy to forget, and it
wasn't the calling driver that opened the device in the first place.
This change unbreaks maynuo-m97 and rdtech-dps.
Changing triggers (e.g. from low to high) would sometimes cause the
acquisition to seemingly "hang" due to missing variable initializations
(in reality the device would wait for incorrect triggers and/or on
incorrect channels).
This fixes bug #1535.
Prefer the common conversion helper for little endian 16bit signed data.
The previous local implementation only worked for positive values, and
yielded incorrect results for negative temperatures.
This fixes bug #1463.
src/resource.c:414: warning: unbalanced grouping commands
conversion.c:81: warning: argument 'lo_thr' from the argument list of sr_a2l_schmitt_trigger has multiple @param documentation sections
src/analog.c:611: warning: return value 'SR_ERR_ARG' of sr_rational_div has multiple documentation sections
src/device.c:205: warning: explicit link request to 'TRUE' could not be resolved
src/device.c:205: warning: explicit link request to 'FALSE' could not be resolved
src/device.c:231: warning: explicit link request to 'TRUE' could not be resolved
src/device.c:231: warning: explicit link request to 'FALSE' could not be resolved
src/serial.c:246: warning: explicit link request to 'NULL' could not be resolved
src/strutil.c:602: warning: explicit link request to 'NULL' could not be resolved
src/device.c:94: warning: unable to resolve reference to 'sr_channel_free()' for \ref command
src/strutil.c:597: warning: unable to resolve reference to 'sr_hexdump_free()' for \ref command
src/strutil.c:622: warning: unable to resolve reference to 'sr_hexdump_new()' for \ref command
src/device.c:430: warning: The following parameters of sr_dev_inst_channel_add(struct sr_dev_inst *sdi, int index, int type, const char *name) are not documented: parameter 'sdi'
src/session.c:163: warning: The following parameters of fd_source_new(struct sr_session *session, void *key, gintptr fd, int events, int timeout_ms) are not documented: parameter 'events'
Since we've now seen lots of devices in the wild that come with the
"HCS-" prefix in the ID, it's probably safe to assume all of them
could have it.
This fixes bug #1530.
The previous implementation got stuck in an infinite loop when data
acquisition started, but the device got disconnected before the data
acquisition terminates. An implementation detail ignored communication
errors, and never saw the expected condition that was required to
continue in the sample download sequence. Unbreak that code path.
Even though the devices/websites/manuals usually say 0..30V, the
hardware actually accepts up to 31V, both via serial as well as by
simply rotating the knob on the device (and our driver already
reflects that).
The same is true for current, it's usually 0..5A as per docs, but many
(probably all) devices accept 5.1A via serial and knob.
Thus, set the max current of all devices to 5.1A (or 3.1A for 3A
devices). We're assuming they all have this property, and we've seen
this in practice on at least three different versions of the device.
On a recently acquired Korad KA3005P power supply, the ID supplied by the
device is not known by libsigrok.
$ sigrok-cli --driver=korad-kaxxxxp:conn=/dev/ttyACM0 --scan
sr: korad-kaxxxxp: Unknown model ID 'KORAD KA3005P V4.2' detected, aborting.
This fixes bug #1522.
Thanks to bitaround@gmail.com for the amperage fix.
The DMMs report as an event to which mode the user switched (by turning the
rotary switch): "*0", "*1", etc.
Most other DMMs have few modes, but the U127x DMMs have up to 11 different
modes (i.e., "*10" is a valid event).
The @brief keyword is not needed since JAVADOC_AUTOBRIEF is enabled in the
Doxygen configuration. Remove a remaining "DIAG" prefix in a debug
message, the output already is filtered according to the log level, and
prefixed by the module name.
Don't clobber the user provided samplerate (specified by input module
options). This allows to re-determine the samplerate from a potentially
changed file on disk upon reload.
The list of previously created channels is kept across file reloads. We
also need to keep (or re-create) the list of channels that are used for
datafeed submission of analog data.
Release more allocated resources in the .cleanup() routine, and do reset
internal state such that a .reset() thus .cleanup() then .receive() call
sequence will work. This code path is taken for re-import of files (see
bug #1241, CSV was affected).
Slightly unobfuscate the "end of current input chunk" marker in the data
processing loop. Make the variable's identifier reflect that it's not a
temporary, but instead something worth keeping around until needed again.
Unbreak the calculation of line numbers in those situations where input
chunks (including previously accumulated unprocessed data) happens to
start with a line termination. This covers input files which start with
empty lines, as well as environments with multi-byte line termination
sequences (CR/LF) and arbitrary distribution of bytes across chunks.
This fixes bug #968.
Accept when there is no line termination in the current input chunk. We
cannot assume that calling applications always provide file content in
large enough chunks to span complete lines. And any arbitrary chunk size
which applications happen to use can get exceeded by input files (e.g.
for generated files with wide data or long comments).
Mention the required synchronization of default option values and format
match logic in a prominent location where options are discussed.
Update the TODO list for the CSV input module. Mixed signal handling got
fixed by rearranging channel creation. Samplerate can get determined
from timestamp columns already. The double data type for analog data
remains.
The previous implementation incompletely handled arbitrary data type
orders in mixed signal input files. Rearrange the logic such that all
format specs get parsed first, then all channel creation details get
determined, then all channels get created. It appears to be essential
that all logic channels get created first, resulting in index numbers
starting at 0 and addressing the correct position in bitfields. The
analog channels get created after all logic channels exist. Adjacent
number ranges for channel types also results in more readable logic in
other locations.
This was tested with -I csv:column_formats=t,2a,l,a,x4,a,-,b3 example
data, and works as expected.
Implement .format_match() support in the CSV input module. Simple
multi-column files will automatically load without an "-I csv" spec.
Non-default option values still do require the module selection before
options can get passed to the module.
Try to balance a compact format and completeness/accuracy of content for
builtin help texts for the CSV input module's options. Assume a technical
audience (this is signal analysis software after all).
Rename a few internal identifiers which help organize the list of options
and their help texts. Overly short names had become obscure.
Accept 't' format specs for timestamp columns. Automatically derive the
samplerate from input data when timestamps are available and the user
did not provide a rate. Stick with a simple approach for robustness
since automatic detection is easy to override when it fails. This
feature is mostly about convenience.
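A sketch of the simple approach, assuming the timestamps of two adjacent
data lines are available as numbers of seconds (names are placeholders):

double period;
uint64_t samplerate;

/* t_curr, t_prev: timestamps parsed from adjacent data lines. */
period = t_curr - t_prev;
if (!user_samplerate && period > 0.0)
	samplerate = (uint64_t)(1.0 / period + 0.5); /* easy to override */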
The previous implementation open coded type checks by comparing an enum
value against specific constants. Which was especially ugly since there
are multiple types which all are logic, and future column types neither
get ignored nor have channels associated with them.
Improve readability and robustness by adding helpers which check classes
(ignore, logic, analog) instead of the multitude of -/x/o/b|l/a variants
within the classes.
Also comment on the order of channel creation, and how to improve it.
Expand the developer comment that's inline in the source file. There is
enough room to explain things. Try to come up with a "one-line" help text
that is precise yet compact, and does inform the user of available format
choices including modifiers. Chances are that longer descriptions start
reducing the usefulness of the help or the visibility of options when
users are in a hurry. Those who care can access the manual.
Mark more options as obsolete, and mention more default values in the
builtin help text. Also tweak a comment on getting channel names from
header lines.
Support for mixed signal CSV input data is desirable and should be
possible. The current implementation just happens to not fully cope with
arbitrary mixes of data types in columns yet. Add a quick workaround,
but also a TODO item to properly address the topic later.
Extend the CSV input module which was strictly limited to logic data so
far. Add support for analog data types. Implement the 'a' column format,
and feed analog data to the session bus.
This implementation feeds data of individual analog channels to the
session bus in separate packets each. This approach was found to work
most reliably, not all recipients support the submission of multiple
samples for multiple channels in a single packet.
A fixed 'digits' value is used. This needs to get addressed later.
Local experiments suggest that the 'double' data type for analog data
can result in erroneous visual presentation (observed with sigrok-cli).
Use 'float' for now, until the issue is understood and got fixed.
Support for double is prepared internally and is easily enabled.
Address several minor nits. Eliminate unneeded variables. Update text to
number conversion comments including wildcard handling. Remove empty
lines in init() which used to break up a set of lines that all do the
same thing (evaluate a set of options) and belong together.
Move the creation of logic channels to the location where formats fields
get iterated, and column processing details get derived. This reduces a
lot of redundancy, and simplifies the addition of more data formats.
Update the list of TODO items at the top of the CSV input module's
source. Text line handling (counting line numbers) got fixed. Adding
support for analog channels was prepared, as are timestamp columns.
Rename the CSV input module's option keywords. To better reflect their
purpose, and for consistency across the rather complex set of options
and how they interact. Rearrange the list of options (not that the order
matters from the outside, but it's good to have during maintenance).
Update builtin help texts which will show up in applications, as well as
the source code comments which discuss these options in greater detail.
It would be nice to have a "general" help text for input modules which
is not tied to one single option, to provide an overview or usage
examples.
Arrange the option keys and the short and long help texts such that the
source better reflects the applications' screen layout, again to better
support future maintenance.
Consistently separate multi-word keywords for improved readability.
Prefer underscores over dashes for consistency with common keys in
shared infrastructure in other project sources (device options, MQ
items, etc).
Extend the "column-formats" option support in the CSV input module to
also support wildcards and automatic channel count detection. Move the
format string interpretation to the location where the first data line
or the optional header line is seen. Map the simple options (single
column number and channel count, or first column number and optional
channel count) to a format string, to unify internal code paths. Remove
code paths for the previous specific yet limited scenarios.
Rephrase the condition which keeps executing the "initial receive"
phase. The text line termination sequence gets derived from the first
complete text line, but other essential information is only gathered
later, potentially after skipping a large (user specified) amount of
input data. Keep checking for this essential setup data until the data
or the header has actually been seen, before regular processing of input
data starts.
Extend the CSV input module, introduce support for the "column-formats="
option. This syntax can express the previous single- and multi-column
semantics, as well as any arbitrary order of ignored columns and single-
or multi-bit columns in several formats.
The previous "simple" keywords for the single- and multi-column modes are
still in place; it's yet to be determined whether to axe them, depending
on whether users can handle the format strings for these simple cases.
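To illustrate what such a spec could look like (hypothetical examples
which merely combine the letters described above, not an authoritative
reference): a spec like "-,x8" would ignore the first column and take
eight logic channels from a single column of hexadecimal digits, while
"-,l,l,l,l" would ignore the first column and map the following four
columns to one logic bit each.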
The previous implementation allowed CSV input files to use any line
termination in either CR only, LF only, or CR/LF format. The first EOL
was searched for and recorded, but then was not used. Instead, any CR or
LF was considered a line termination. "Raw data processing" still
was correct, but line numbers in diagnostics were way off, and optional
features like skipping first N lines were not effective. Fix that.
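A sketch of how the termination could be derived once and then used
consistently (assumed names, glib assumed for splitting):
#include <string.h>
#include <glib.h>
/* Sketch: derive the EOL sequence from the first complete line, then use
 * that very sequence for all later splitting, so line numbers and
 * "skip first N lines" see the same line count. */
static const char *get_line_termination(const char *buf)
{
	const char *cr, *lf;

	cr = strchr(buf, '\r');
	lf = strchr(buf, '\n');
	if (cr && lf && lf == cr + 1)
		return "\r\n";
	if (cr)
		return "\r";
	return "\n";
}
/* Later, e.g.: lines = g_strsplit(text, termination, 0); */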
Source code inspection suggests the "startline" feature did not work at
all. The user provided number was used in the initial optional search
for the header line (to get signal names) or the auto-determination of
the number of columns, but then was not used when the file actually got
processed.
Reduce "state" in the CSV input module's context. Stick with variables
that are local to routines when knowledge of details need not be global.
Really base the processing of a column's input text on the column's
processing information which was gathered in the setup phase.
Rename a few identifiers to explicitly refer to logic channels (the only
currently supported data type of the CSV input module). Cease feeding
logic data to the session bus when there are no logic channels at all
(currently not really an option). Prepare for simpler dispatching of
parse routines should more data types get added in a future version.
Reduce some "clutter" (overly fragmented stuff that should go together
since it forms logical groups and is not really standalone). Address a
few more minor style nits (sizeof() redundancy, "seemingly inverse"
string comparison phrases).
Improve the code paths which determine logic channels' names from an
optional CSV file header line. Strip optional quotes from the column's
input text (re-use a SCPI helper routine for that). Also use the channel
name for multi-bit fields, appending [0] etc. suffixes in that case. Comment
on the manipulation of input data, which is acceptable since that very
data won't get processed another time in another code path.
Rephrase the CSV input module's implementation such that generic support
to "process a column" becomes available. All columns of an input file's
text line get inspected, a column can either get ignored, or converted
to logic data. A future version can then remove the current limitations
of single- and multi-column modes (either one single multi-bit cell, or
multiple single-bit cells which must be adjacent).
Combine the bin/oct/hex parse routines into one routine which handles up
to four bits per input number digit with common logic. Availability of
more data than channels (according to user specs) is not fatal.
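A sketch of such a combined parser, under the assumption that the bit
count per digit (1, 3, or 4) drives the common logic (names and the exact
bit ordering are illustrative):
#include <stddef.h>
#include <stdint.h>
#include <string.h>
/* Hypothetical sketch: one parser for bin/oct/hex cells. Digits beyond
 * 'max_bits' are silently ignored (more data than channels is not fatal). */
static int parse_number_text(const char *text, size_t bits_per_digit,
	uint8_t *out_bits, size_t max_bits)
{
	size_t bit_idx, b;
	const char *p;
	int digit;

	bit_idx = 0;
	for (p = text + strlen(text); p > text && bit_idx < max_bits; /* EMPTY */) {
		p--;
		if (*p >= '0' && *p <= '9')
			digit = *p - '0';
		else if (*p >= 'a' && *p <= 'f')
			digit = *p - 'a' + 10;
		else if (*p >= 'A' && *p <= 'F')
			digit = *p - 'A' + 10;
		else
			return -1;	/* Not a valid digit at all. */
		if (digit >= (1 << bits_per_digit))
			return -1;	/* Digit out of range for the base. */
		for (b = 0; b < bits_per_digit && bit_idx < max_bits; b++)
			out_bits[bit_idx++] = (digit >> b) & 1;
	}
	return 0;
}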
Drop the counterintuitive "first-channel" option, use "first-column"
instead. Warn when comment leader and column separator are identical
(was silent before, may be unexpected). Extend diagnostics and address
minor readability nits, update comments. Rephrase logic channel name
assignment.
Use simple scalar options to derive generic processing details: Either
'single-column' and 'numchannels' are required, with an optional
'format' spec (resulting in single-column mode). Or 'first-column' with
an optional 'numchannels' (multi-column mode with fixed format, using
all available columns by default). The default is multi-column mode with
one logic channel per column and spanning all columns on a text line.
Don't clobber the value of the user provided 'header' option. Use a
separate flag to track whether the header line was seen before, or
needs to get skipped when it passes by.
Move the communication of the samplerate meta packet to the very spot
where logic sample data gets sent. This allows the samplerate to
optionally be determined late, potentially from input data instead of
user specs.
Move the helper routines which arrange for the data feed to an earlier
spot, so that references resolve without forward declarations. Rename
routines to reflect that they deal with logic data.
Slightly unobfuscate column text to logic data conversion, and reduce
redundancy. Move sample data preset to a central location.
Rephrase error messages, provide stronger hints as to why the input text
for a conversion was considered invalid.
The previous implementation assumed that in multi-column mode each cell
communicates exactly one bit of input (a logic channel). But only the
first character got tested. Tighten the check, to cover the whole input
text. This rejects fully invalid input, as well as increases robustness
since multi-bit input like "100" was mistaken for a value of 1 before.
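A minimal sketch of the tightened check (illustrative name only):
#include <glib.h>
/* Sketch: a single-bit cell must be exactly "0" or "1"; checking only the
 * first character would accept "100" and silently take it as 1. */
static gboolean valid_single_bit_text(const char *text)
{
	if (!text || !text[0] || text[1])
		return FALSE;
	return text[0] == '0' || text[0] == '1';
}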
Add documentation to the bin/hex/oct text parse routines, and move the
bin/hex/oct dispatcher to the location where its invoked routines are.
Stick with a TODO comment for parse_line() to reduce the diff size.
The parse_line() routine is rather complex: it optionally accepts an
upper limit for the number of columns, but unconditionally assumes a
first one and drops preceding fields. The rather generic n and k
identifiers are not helpful.
Use the 'seen' and 'taken' names instead, which better reflect what's
actually happening. Remove empty lines which used to tear apart groups
of instructions that are strictly related. This organization was not
helpful during maintenance either.
Accept when comments are indented, trim the whitespace from text lines
after stripping off the comment. This avoids the processing of lines
which actually are empty, and improves robustness (avoids errors for a
non-fatal situation). Also results in more appropriate diagnostics at
higher log levels.
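A sketch of the comment stripping and trimming, assuming a hypothetical
helper and comment leader parameter (glib's g_strstrip does the trim):
#include <string.h>
#include <glib.h>
/* Sketch: strip an optional comment, then trim whitespace, so an indented
 * comment leaves an empty line which callers can skip instead of flagging
 * an error. */
static void strip_comment(char *line, const char *comment_leader)
{
	char *p;

	if (comment_leader && comment_leader[0]) {
		p = strstr(line, comment_leader);
		if (p)
			*p = '\0';
	}
	g_strstrip(line);
}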
The CSV input module has grown so many options that counting them by
hand became tedious and error prone. Eliminate the magic numbers in the
associated code paths.
This also has the side effect that the set is easy to re-order just by
adjusting the enum; no other code is affected. Help texts and default
values are much easier to verify and adjust with the symbolic references.
[ see 'git diff --word-diff' for the essence of the change ]
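The pattern, with hypothetical option identifiers (the actual names and
set of options differ):
#include <libsigrok/libsigrok.h>
/* Sketch: the enum counts the options, so no magic number needs maintenance
 * and re-ordering only touches the enum. */
enum option_index {
	OPT_COL_FORMATS,
	OPT_START_LINE,
	OPT_HEADER,
	OPT_SAMPLERATE,
	OPT_MAX_OPTIONS,	/* Not an option, just the count. */
};
static struct sr_option options[OPT_MAX_OPTIONS + 1];	/* +1 for the terminator. */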
Use size_t for things that get counted: column indices, channel numbers
(line numbers already used size_t). De-anonymize an enum to avoid 'int'
where it gets referenced. Adjust printf(3) format strings. Get unsigned
values from option lookups (stick with 32bits, should be acceptable for
spreadsheet columns and channel counts).
Address other minor nits while we are here: Also terminate the last item
in an enum declaration. Add a doxygen comment for parse_line(). Rename a
parameter to achieve tabular doc text layout.
Rephrase the #include statements in the CSV input module. "config" is
not a system header but is provided by the application source code.
Separate the config and system and application groups (their order is
essential). Alpha-sort the files within their group for simplified
maintenance.
Do for the CSV input module what commit 08f8421a9e did for VCD. Check
the channel list for consistency across re-imports of the same file.
This addresses the CSV part of bug #1241.
The cleanup() routine gets invoked upon shutdown, as well as before
re-importing another file. The cleanup() routine must not release
resources which get allocated in the init() routine, as the init()
routine won't run again in the module's lifetime. The cleanup() routine
must void those context fields which get evaluated in the next receive()
calls.
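A sketch of that lifecycle rule, with purely hypothetical context fields
(the module's actual context differs):
#include <glib.h>
#include <libsigrok/libsigrok.h>
#include "libsigrok-internal.h"
/* Sketch: release and void what belongs to one import run, but keep what
 * init() allocated, since init() won't run again in the module's lifetime. */
struct context {
	char *termination;	/* Derived per import. */
	GString *cur_line;	/* Accumulated per import. */
	/* ... option values gathered in init() live here, too ... */
};
static void cleanup(struct sr_input *in)
{
	struct context *inc;

	inc = in->priv;
	g_free(inc->termination);
	inc->termination = NULL;
	if (inc->cur_line)
		g_string_free(inc->cur_line, TRUE);
	inc->cur_line = NULL;
	/* Do not free 'inc' itself or the option values from init(). */
}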
The previous implementation inspected the input stream's samplerate, and
simply used the next 1kHz/1MHz/1GHz timescale for VCD export. Re-import
of the exported file might suffer from rather high overhead, so users
might have to downsample the input stream. Also, exported data might use
an "odd" timescale which doesn't represent the input stream's timing
well.
Rephrase the samplerate to VCD timescale conversion such that the lowest
frequency is used which satisfies the file format's constraints and
provides a high enough resolution to communicate the input stream's
timing with minimal loss. Do limit this scaling to at most three orders
of magnitude above the input samplerate, to most appropriately cope with
odd rates.
As a byproduct the rephrased implementation transparently supports rates
above 1GHz. Input streams with no samplerate now result in 1s timescale
instead of the 1ms timescale of the previous implementation.
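A sketch of how such a selection could look, not necessarily the module's
exact code (name and cap arithmetic are illustrative):
#include <stdint.h>
/* Sketch: use the lowest power-of-ten frequency at or above the samplerate,
 * then scale up (by at most three orders of magnitude over the samplerate)
 * until sample periods map to an integer number of timescale ticks. */
static uint64_t timescale_freq(uint64_t samplerate)
{
	uint64_t freq;

	if (!samplerate)
		return 1;	/* No rate known: 1s timescale. */
	freq = 1;
	while (freq < samplerate)
		freq *= 10;
	while (freq % samplerate && freq * 10 <= samplerate * 1000)
		freq *= 10;
	return freq;
}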
The 'period' member of the VCD output module's context is supposed to
hold frequencies that correspond to the timescale used during export.
An 'int' (in combination with VCD's 1/10/100 constraint) thus would
result in a 1GHz limit, use uint64_t instead to support higher rates.
Iterate over the received sample set first, before iterating over the
respective sample's channels. This avoids redundant extraction of sampled
bits (which saves only a little), and also increases the locality of
processed data (though string accumulation still may be expensive).
It also adds the future option of RLE compression during accumulation of
output data, which perfectly matches the WaveDrom syntax for repeated
bit patterns.
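A sketch of how such accumulation could look with the new loop order
(illustrative names; the module may not implement the RLE part yet). A
repeated bit appends '.', which is WaveDrom's "extend previous value":
#include <stddef.h>
#include <stdint.h>
#include <glib.h>
static void accumulate_waves(const uint8_t *data, size_t num_samples,
	size_t unitsize, size_t num_channels, GString **wave)
{
	size_t i, ch;
	const uint8_t *sample;
	uint8_t bit, prev[64] = { 0 };	/* Assumes at most 64 channels here. */

	for (i = 0; i < num_samples; i++) {
		sample = &data[i * unitsize];
		for (ch = 0; ch < num_channels; ch++) {
			bit = (sample[ch / 8] >> (ch % 8)) & 1;
			if (i && bit == prev[ch])
				g_string_append_c(wave[ch], '.');
			else
				g_string_append_c(wave[ch], bit ? '1' : '0');
			prev[ch] = bit;
		}
	}
}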
Rearrange the order of routines in the wavedrom output module. Keep the
flow of .receive() -> .process_logic() -> .wavedrom_render() in one common
group of routines, which is not disrupted by the .init() and .cleanup()
routines which are kind of boilerplate in the source file. This increases
readability and maintainability.
Adjust brace style, use C language comments, drop camel case. Use size_t
for indices and offsets. Unobfuscate the open/close logic of rendered
output. Allocate zero-filled memory, reduce sizeof() redundancy. Don't
SHOUT in the module's .name property.
[ Changes indentation, see 'git diff -w -b' for review. ]