The last release (0.4.0) had the libtool version (current:revision:age)
set to 3:0:0. Since this release adds, removes, and changes interfaces,
the new version is 4:0:0.
http://www.gnu.org/software/libtool/manual/libtool.html#Updating-version-info
This changes the library filename (e.g. on Linux) from libsigrok.so.3.0.0
to libsigrok.so.4.0.0, the SONAME (+symlink) becomes libsigrok.so.4.
File template by Stefan Brüns, thanks!
This fixes bug #857.
(the XML file is moved from PulseView to libsigrok since this is not
PulseView-specific)
Add a 48x48 PNG and a scalable SVG for the MIME type as well.
Install the XML file and the icons in their respective standard paths.
This is more specific and prevents potential issues, e.g. when multiple
distro packages ship a generic file like sigrok-logo-notext.png that is
supposed to be installed in the same place.
The Input::send() method allocated glib strings and copied input data,
but only released the glib string container and leaked the actual string
content. This leaked memory on the order of the verbatim input file's
size, which is especially wasteful for uncompressed textual representations.
This fixes bug #976.
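For illustration, the leak pattern and its fix come down to the second
argument of g_string_free(); this is a minimal glib sketch, not the exact
bindings code:

    #include <glib.h>

    /* Copy 'len' bytes of input data into a GString, hand it on, then
     * release it. Passing FALSE to g_string_free() only frees the GString
     * container and returns the character data, which then leaks unless
     * the caller frees it. Passing TRUE releases the copied data as well. */
    static void send_chunk(const char *data, size_t len)
    {
        GString *buf = g_string_new_len(data, len);

        /* ... pass 'buf' to the consumer ... */

        g_string_free(buf, TRUE); /* TRUE frees the string content, too */
    }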
When the processing of columns of text lines detected errors, the loop
was aborted and the routine was left, but allocated resources were not
freed. Fix the remaining memory leaks in the error code paths.
The constant at the top of the source file is the number of samples in a
datafeed submission chunk. The previous implementation erroneously made
it the size in bytes. There is no need to round down the buffer size
according to the unit size.
The previous implementation sent one sigrok session datafeed packet per
processed CSV line. This is rather inefficient for the CSV input module,
and triggers a dramatic performance loss in the srzip output format.
Communicate up to 128K samples within one datafeed packet. This fixes
bug #695.
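A rough sketch of the batching idea, assuming the usual libsigrok datafeed
structures; the chunk size constant and the helper name are illustrative,
not the module's actual code:

    #include <stdint.h>
    #include <libsigrok/libsigrok.h>

    #define CHUNK_SAMPLES (128 * 1024) /* samples per datafeed packet */

    /* Accumulate converted sample data and flush it as one SR_DF_LOGIC
     * packet once CHUNK_SAMPLES samples were queued (or at end of input),
     * instead of sending one packet per processed CSV line. */
    static int flush_samples(const struct sr_dev_inst *sdi,
        uint8_t *data, uint64_t sample_count, uint16_t unitsize)
    {
        struct sr_datafeed_logic logic = {
            .length = sample_count * unitsize,
            .unitsize = unitsize,
            .data = data,
        };
        struct sr_datafeed_packet packet = {
            .type = SR_DF_LOGIC,
            .payload = &logic,
        };

        return sr_session_send(sdi, &packet);
    }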
Factor out repeated calculation of the unit size which is derived from
the channel count. Fix a minor memory leak in an error path while we are
here. (Other memory leaks in rare error paths remain with this commit.)
Suggested-By: Elias Oenal <sigrok@eliasoenal.com>
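The derived quantity is simply the number of bytes needed to hold one
sample of all logic channels; a one-line helper along these lines
(illustrative only):

    #include <stddef.h>

    /* One logic sample occupies one bit per channel, rounded up to bytes. */
    static size_t unit_size_from_channels(size_t channel_count)
    {
        return (channel_count + 7) / 8;
    }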
On the Windows platform it appears to be popular to _not_ terminate the
very last line in a text file, which results in an unmet constraint in
the CSV input module and an internal exception in PulseView that aborts
program execution.
Cope with the absence of the text line termination sequence at the very
end of the input stream. Keep all other checks in place, such that only
completely received text lines get processed.
This fixes bug #635.
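One way to cope, sketched with hypothetical names: when end() sees
leftover, unterminated text, append the previously detected termination
sequence before running the normal "complete lines only" processing.

    #include <glib.h>
    #include <string.h>

    /* Hypothetical helper: make sure the accumulated input text ends in
     * the detected end-of-line sequence, so the regular line processing
     * also covers an unterminated final line. */
    static void terminate_trailing_line(GString *buf, const char *eol)
    {
        size_t eol_len = strlen(eol);

        if (buf->len == 0)
            return;
        if (buf->len >= eol_len &&
            !strncmp(buf->str + buf->len - eol_len, eol, eol_len))
            return; /* already terminated */
        g_string_append(buf, eol);
    }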
"Document" the current state of the implementation in the CSV input
module's source code. Discuss how text handling is non-trivial, which
approaches are available and how they have drawbacks.
Mention the lack of support for the import of analog data as well.
The CSV input module supports variable length end-of-line encodings
(either CRLF, or CR, or LF). When a bunch of accumulated text lines got
processed, do skip the corresponding number of characters after the end
of the last processed line.
This fixes one of the issues discussed in bug #635.
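The amount to skip follows from the detected termination sequence; a
sketch with hypothetical names (the real module's bookkeeping differs):

    #include <glib.h>
    #include <string.h>

    /* Drop everything up to and including the last processed line's
     * termination. 'eol' is the detected sequence (CRLF, CR, or LF), so
     * the skip amount adapts to its length instead of assuming a fixed
     * one or two characters. */
    static void consume_processed_lines(GString *buf, size_t last_line_end,
        const char *eol)
    {
        g_string_erase(buf, 0, last_line_end + strlen(eol));
    }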
The input module runs receive() and end() invocations which end up
calling process_buffer(). It's perfectly legal to call the process
routine with an empty accumulation buffer, especially when the process
routine was called from end().
This fixes a condition where PulseView raised a fatal error at the end
of a completed successful import.
Reported-By: Sergey Alirzaev <zl29ah@gmail.com>
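The guard itself is trivial; a sketch with a hypothetical context struct:

    #include <glib.h>
    #include <libsigrok/libsigrok.h>

    /* Hypothetical per-instance context holding the accumulation buffer. */
    struct context {
        GString *buf;
    };

    /* Processing an empty accumulation buffer is legal (it routinely
     * happens when end() runs after all data was already consumed), so
     * return success instead of raising a "no data" error. */
    static int process_buffer(struct context *inc)
    {
        if (inc->buf->len == 0)
            return SR_OK;

        /* ... split into lines, convert, send datafeed packets ... */
        return SR_OK;
    }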
Move an independent test for single/multi column operation out of a code
path that checked for and then processed text lines. This commit does
not change behaviour, but prepares for a subsequent commit.
Factor out a magic string literal which held a delimiter set yet could
be mistaken for an (assumed) fixed termination string. Concentrate the
determination of the end-of-line text encoding as well as the resulting
set of possible delimiters in one nearby location. The symbolic name
for the delimiter set eliminates any doubt about its purpose.
Change uninstall-local to uninstall-hook, since the latter is guaranteed
to run last (order is apparently not guaranteed for uninstall-local).
This fixes bug #861.
Handle the case where the sample data memory was filled and wrapped
around during acquisition. Download the part of the data which is
reliably available, skipping only a single 1KB row which might contain
either old or new data, since it is not certain which of the two it holds.
This will be essential when triggers become available later. Right now
it copes with user requests for sample counts that exceed the total DRAM
capacity, in which case the maximum available amount of data is provided.
Of course acquisition no longer gets stopped when the end of DRAM is
reached.
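Conceptually, the download range becomes a modulo computation over the
DRAM rows; a simplified sketch with hypothetical names and a hypothetical
row count, not the driver's actual accounting:

    #include <stdbool.h>
    #include <stddef.h>

    #define DRAM_ROWS 2048 /* hypothetical number of 1KB rows */

    /* Determine which rows to download. If acquisition wrapped around,
     * all rows are valid except the one currently being overwritten,
     * which may hold a mix of old and new data and is therefore skipped. */
    static void get_download_range(size_t write_row, bool wrapped,
        size_t *first_row, size_t *row_count)
    {
        if (!wrapped) {
            *first_row = 0;
            *row_count = write_row + 1;
        } else {
            *first_row = (write_row + 1) % DRAM_ROWS; /* skip uncertain row */
            *row_count = DRAM_ROWS - 1;
        }
    }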
Correctly determine the size of a download chunk for the last DRAM row
that's involved in the recent data acquisition.
This commit is based on work done by jry@.
This addresses bug #838 (trailing garbage).
It's assumed that the previously downloaded excess data was "swallowed"
by the sample count enforcement logic that was applied earlier, so the
(remainder of the) issue could have gone unnoticed, unless some other
termination condition than sample count was used.
Configure the samplerate clock and channel count during acquisition
start in identical ways for 50MHz, 100MHz, and 200MHz modes.
This part was inspired by work done by jry@ yet was addressed in
different ways (no exception, do everything in every mode the same way).
Eliminate a portability issue in the previous implementation. Make sure
to send the configuration bytes in the correct order to the hardware.
Don't typecast a struct reference to a byte pointer and hope that the
internal memory representation matches the external hardware's expectations.
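The portable approach is to serialize the register values into a byte
buffer explicitly, in the order the hardware expects, instead of casting
a struct to a byte pointer; the field names below are made up for
illustration:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical clock/trigger configuration values. */
    struct clock_config {
        uint16_t divider;
        uint8_t channel_mask;
        uint8_t flags;
    };

    /* Serialize the configuration byte by byte, in the order the hardware
     * expects. This avoids depending on the compiler's struct layout,
     * padding, and host endianness. */
    static size_t serialize_clock_config(const struct clock_config *cfg,
        uint8_t *buf)
    {
        size_t idx = 0;

        buf[idx++] = cfg->divider & 0xff;        /* low byte first */
        buf[idx++] = (cfg->divider >> 8) & 0xff; /* then high byte */
        buf[idx++] = cfg->channel_mask;
        buf[idx++] = cfg->flags;

        return idx;
    }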
Add a comment about sample memory organization in a central spot. This
concentrates knowledge which otherwise would be spread across several
locations all over the driver's codebase, yet is essential to have at
hand during maintenance.
All of the information was determined/updated by jry@ with the help of
Ondrej at Asix when he did lots of fixes and improvements to asix-sigma.
The Asix Sigma hardware does not support a sample count limit. Instead
this optional input parameter gets mapped to a sample time, and some
slack for hardware pipelines and compression gets added. When data
acquisition completes and sample data gets downloaded, chances are that
there is more data than requested by the user.
Do enforce the optional sample count limit. Stop sending data to the
sigrok session when the configured number of samples was sent.
This commit is based on work done by jry@.
This fixes bug #838.
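Enforcement amounts to counting samples as they get sent and truncating
the final chunk; a simplified sketch (helper and variable names are
illustrative):

    #include <stdint.h>

    /* Clamp the size of a downloaded chunk so that no more than the
     * user-configured number of samples ever reaches the sigrok session. */
    static uint64_t samples_to_send(uint64_t chunk_samples,
        uint64_t limit_samples, uint64_t *sent_samples)
    {
        uint64_t remaining, count;

        if (*sent_samples >= limit_samples)
            return 0; /* limit reached, drop the excess data */

        remaining = limit_samples - *sent_samples;
        count = chunk_samples < remaining ? chunk_samples : remaining;
        *sent_samples += count;

        return count;
    }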
Enhance how the data acquisition is stopped. Wait for the hardware to
flag the successful completion of data retrieval as well as flushing
through hardware pipelines.
Use symbolic identifiers for the mode register's fields (for read as
well as write access).
This commit uses part of a code update by jry@ which better matches the
documentation, but not all of it, to keep the size of the commit down.
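In spirit, the change replaces magic numbers with named register fields
and waits for a completion flag; the names and bit positions below are
purely hypothetical, not the Sigma's actual register layout:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical symbolic names for mode register fields. */
    #define MODE_REG_STOP_REQ   (1 << 0) /* request acquisition stop */
    #define MODE_REG_FLUSH_DONE (1 << 1) /* hardware flushed its pipeline */

    /* Hypothetical register accessor (stub for illustration). */
    static uint8_t read_mode_reg(void)
    {
        return MODE_REG_FLUSH_DONE;
    }

    /* Poll (with an upper bound) until the hardware signals that data
     * retrieval completed and its pipelines were flushed. */
    static bool wait_for_flush(unsigned int max_polls)
    {
        while (max_polls--) {
            if (read_mode_reg() & MODE_REG_FLUSH_DONE)
                return true;
        }
        return false;
    }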