How to use the wfdb.processing module in wfdb

To help you get started, we’ve selected a few wfdb examples, based on popular ways it is used in public projects.
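
For orientation, here is a minimal, self-contained sketch of how processing.compare_annotations is typically called, before looking at the project code below. The peak locations and the sampling rate are made up for illustration; the function compares a reference and a test array of sample indices within a tolerance window (given in samples) and returns a Comparitor object whose tp, fp, and fn attributes the benchmark below relies on.

import numpy as np
from wfdb import processing

# hypothetical peak locations (sample indices) from a manual annotator and
# from a detection algorithm; sfreq is an assumed sampling rate in Hz
sfreq = 50
manual_peaks = np.array([100, 350, 610, 880])
algo_peaks = np.array([102, 355, 500, 884])

# a detected peak counts as a match if it lies within 0.7 s of a reference peak
window = int(0.7 * sfreq)

comparitor = processing.compare_annotations(manual_peaks, algo_peaks, window)
print(comparitor.tp, comparitor.fp, comparitor.fn)

Recent wfdb versions also offer a print_summary() method on the returned object for a quick report, but the attribute access shown above is what the project code uses.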

github JohnDoenut / biopeaks / benchmark_resp.py
#    plt.scatter(algopeaks, resp[algopeaks], c='g', marker='X', s=150)

    # perform benchmarking against each annotator separately; an
    # algorithmically annotated peak is scored as a true positive if it is
    # within 700 msec of a manually annotated peak; this relatively large
    # window was chosen to account for the clipping that occurs in many of
    # the recordings; clipping results in large plateaus that make placement
    # of the peak somewhat arbitrary (the algorithm places the peak at the
    # edges, while manual annotation places it towards the middle); for
    # non-clipped peaks, the agreement of manual and algorithmic annotation
    # is within a range smaller than 700 msec (enable plotting to confirm)

    # unilateral extent of the acceptance margin centered on each algopeak, in seconds
    acceptance_margin = 0.7

    comparitor1 = processing.compare_annotations(manupeaks1,
                                                 np.ravel(algopeaks),
                                                 int(acceptance_margin *
                                                     sfreq))
    tp1 = comparitor1.tp
    fp1 = comparitor1.fp
    fn1 = comparitor1.fn

    comparitor2 = processing.compare_annotations(manupeaks2,
                                                 np.ravel(algopeaks),
                                                 int(acceptance_margin *
                                                     sfreq))
    tp2 = comparitor2.tp
    fp2 = comparitor2.fp
    fn2 = comparitor2.fn

    # calculate two metrics for benchmarking (according to AAMI guidelines):
    # 1. sensitivity: how many of the manually annotated peaks does the
    # algorithm annotate as peaks (TP / (TP + FN))?
    # 2. precision: out of all peaks that are algorithmically annotated as
    # peaks (TP + FP), how many are correct (TP / (TP + FP))?
    sensitivity1.append(float(tp1) / (tp1 + fn1))
    precision1.append(float(tp1) / (tp1 + fp1))
    sensitivity2.append(float(tp2) / (tp2 + fn2))
    precision2.append(float(tp2) / (tp2 + fp2))
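
Because the same ratio computation is repeated for each annotator, it can be factored into a small helper. The sketch below is not part of the original script and the function name is made up; it simply restates the AAMI definitions from the comments above (sensitivity = TP / (TP + FN), precision = TP / (TP + FP)), with a guard against empty denominators.

def aami_metrics(comparitor):
    # hypothetical helper (not from benchmark_resp.py): derive the two
    # AAMI-style benchmark metrics from a wfdb Comparitor's counts
    tp, fp, fn = comparitor.tp, comparitor.fp, comparitor.fn
    sensitivity = float(tp) / (tp + fn) if (tp + fn) else float("nan")
    precision = float(tp) / (tp + fp) if (tp + fp) else float("nan")
    return sensitivity, precision

With such a helper, the per-annotator bookkeeping above reduces to sensitivity1, precision1 = aami_metrics(comparitor1), and likewise for the second annotator.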

wfdb

The WFDB Python package: tools for reading, writing, and processing physiologic signals and annotations.

License: MIT