sed_eval.sound_event.SegmentBasedMetrics

class sed_eval.sound_event.SegmentBasedMetrics(event_label_list, time_resolution=1.0)[source]

Constructor

Parameters:

event_label_list : list or numpy.array

List of unique event labels

time_resolution : float > 0

Segment size used in the evaluation, in seconds. Default value 1.0
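A minimal construction sketch (the event labels below are placeholders chosen for illustration):

    import sed_eval

    # Unique event labels occurring in the reference annotations
    event_labels = ['car', 'speech']

    # Segment-based metrics accumulated over 1-second segments
    segment_based_metrics = sed_eval.sound_event.SegmentBasedMetrics(
        event_label_list=event_labels,
        time_resolution=1.0
    )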


Methods

__init__(event_label_list[, time_resolution]) Constructor
class_wise_accuracy(event_label[, factor]) Class-wise accuracy metrics (sensitivity, specificity, accuracy, and balanced_accuracy)
class_wise_count(event_label) Class-wise counts (Nref and Nsys)
class_wise_error_rate(event_label) Class-wise error rate metrics (error_rate, deletion_rate, and insertion_rate)
class_wise_f_measure(event_label) Class-wise f-measure metrics (f_measure, precision, and recall)
evaluate(reference_event_list, ...[, ...]) Evaluate file pair (reference and estimated)
overall_accuracy([factor]) Overall accuracy metrics (sensitivity, specificity, accuracy, and balanced_accuracy)
overall_error_rate() Overall error rate metrics (error_rate, substitution_rate, deletion_rate, and insertion_rate)
overall_f_measure() Overall f-measure metrics (f_measure, precision, and recall)
reset() Reset internal state
result_report_class_wise() Report class-wise results
result_report_class_wise_average() Report class-wise averages
result_report_overall() Report overall results
result_report_parameters() Report metric parameters
results() All metrics
results_class_wise_average_metrics() Class-wise averaged metrics
results_class_wise_metrics() Class-wise metrics
results_overall_metrics() Overall metrics
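A sketch of a typical evaluation round, assuming the reference and estimated event lists are plain lists of dicts with event_label, onset and offset fields (event lists loaded with sed_eval.io.load_event_list can be passed the same way); the events below are illustrative only:

    import sed_eval

    reference_event_list = [
        {'event_label': 'speech', 'onset': 1.0, 'offset': 4.5},
        {'event_label': 'car',    'onset': 3.0, 'offset': 8.0},
    ]
    estimated_event_list = [
        {'event_label': 'speech', 'onset': 1.2, 'offset': 4.0},
        {'event_label': 'car',    'onset': 2.5, 'offset': 9.0},
    ]

    segment_based_metrics = sed_eval.sound_event.SegmentBasedMetrics(
        event_label_list=['car', 'speech'],
        time_resolution=1.0
    )

    # Accumulate intermediate statistics for one reference/estimated pair;
    # call evaluate() once per file pair before reading out the results.
    segment_based_metrics.evaluate(
        reference_event_list=reference_event_list,
        estimated_event_list=estimated_event_list
    )

    # Metrics computed from the accumulated statistics
    overall = segment_based_metrics.results_overall_metrics()
    class_wise = segment_based_metrics.results_class_wise_metrics()
    print(segment_based_metrics.result_report_overall())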