Features¶
Classes for feature handling
FeatureContainer¶
Container class to store features along with statistics and meta data. The class is based on dict through inheritance from the FeatureFile class.
Usage examples:
# Example 1
feature_container = FeatureContainer(filename='features.cpickle')
feature_container.show()
feature_container.log()
print('Feature shape={shape}'.format(shape=feature_container.shape))
print('Feature channels={channels}'.format(channels=feature_container.channels))
print('Feature frames={frames}'.format(frames=feature_container.frames))
print('Feature vector length={vector_length}'.format(vector_length=feature_container.vector_length))
print(feature_container.feat)
print(feature_container.stat)
print(feature_container.meta)
# Example 2
feature_container = FeatureContainer().load(filename='features.cpickle')
# Example 3
feature_repository = FeatureContainer().load(filename_list={'mel':'mel_features.cpickle', 'mfcc':'mfcc_features.cpickle'})
# Example 4
import numpy
feature_container = FeatureContainer(features=[numpy.ones((100, 10)), numpy.ones((100, 10))])
FeatureContainer (\*args, \*\*kwargs) | Feature container inherited from dict
FeatureContainer.show () | Print container content
FeatureContainer.log ([level]) | Log container content
FeatureContainer.get_path (dotted_path[, ...]) | Get value from nested dict with dotted path (see the sketch after this table)
FeatureContainer.shape | Shape of feature matrix
FeatureContainer.channels | Number of feature channels
FeatureContainer.frames | Number of feature frames
FeatureContainer.vector_length | Feature vector length
FeatureContainer.feat | Feature data
FeatureContainer.stat | Statistics of feature data
FeatureContainer.meta | Meta data
FeatureContainer.load ([filename, filename_dict]) | Load data into container
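A minimal sketch of get_path, following Example 2 above; the dotted path 'meta.audio_file' is an assumed key layout used only for illustration, not a key guaranteed by the stored container:
# Load a stored feature container (as in Example 2 above).
feature_container = FeatureContainer().load(filename='features.cpickle')

# Fetch a nested value with a dotted path; 'meta.audio_file' is a hypothetical key path.
audio_file = feature_container.get_path('meta.audio_file')
print(audio_file)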
FeatureRepository¶
Feature repository class, where feature containers for each feature type are stored in a dict. The type name is used as the key.
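Usage example (a minimal sketch; the file names are placeholders, and the call assumes the filename_dict argument listed in the table below):
# Load one feature container per feature type; the type name is used as the dict key.
feature_repository = FeatureRepository().load(
    filename_dict={
        'mel': 'mel_features.cpickle',
        'mfcc': 'mfcc_features.cpickle',
    }
)

# Each entry is a FeatureContainer, accessible by its type name.
feature_repository['mfcc'].show()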
FeatureRepository (\*args, \*\*kwargs) | Feature repository
FeatureRepository.show () | Print container content
FeatureRepository.log ([level]) | Log container content
FeatureRepository.get_path (dotted_path[, ...]) | Get value from nested dict with dotted path
FeatureRepository.load ([filename_dict]) | Load file list
FeatureExtractor¶
Feature extractor class.
Usage examples:
# Example 1, to get features only, without storing them
feature_repository = FeatureExtractor().extract(
    audio_file='debug/test.wav',
    extractor_name='mfcc',
    extractor_params={
        'mfcc': {
            'n_mfcc': 10
        }
    }
)
feature_repository['mfcc'].show()

# Example 2, to store features during the extraction
feature_repository = FeatureExtractor(store=True).extract(
    audio_file='debug/test.wav',
    extractor_name='mfcc',
    extractor_params={
        'mfcc': {
            'n_mfcc': 10
        }
    },
    storage_paths={
        'mfcc': 'debug/test.mfcc.cpickle'
    }
)
# Example 3
print(FeatureExtractor().get_default_parameters())
FeatureExtractor (\*args, \*\*kwargs) | Feature extractor
FeatureExtractor.extract (audio_file[, ...]) | Extract features for audio file
FeatureExtractor.get_default_parameters () | Get default parameters as dict
FeatureNormalizer¶
Feature normalizer class.
Usage examples:
# Example 1
normalizer = FeatureNormalizer()

for feature_matrix in training_items:
    normalizer.accumulate(feature_matrix)

normalizer.finalize()

for feature_matrix in test_items:
    feature_matrix_normalized = normalizer.normalize(feature_matrix)
    # use the normalized features here

# Example 2
with FeatureNormalizer() as norm:
    norm.accumulate(feature_repository['mfcc'])

for feature_matrix in test_items:
    feature_matrix_normalized = norm.normalize(feature_matrix)
    # use the normalized features here
FeatureNormalizer ([stat, feature_matrix]) | Feature normalizer (see the sketch after this table)
FeatureNormalizer.accumulate (feature_container) | Accumulate statistics
FeatureNormalizer.finalize () | Finalize statistics calculation
FeatureNormalizer.normalize (feature_container) | Normalize feature matrix with internal statistics of the class
FeatureNormalizer.process (feature_data) | Normalize feature matrix with internal statistics of the class
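According to the constructor signature above, a normalizer can apparently also be initialized directly from a feature matrix instead of the accumulate/finalize loop; a minimal sketch under that assumption, with training_feature_matrix and test_feature_matrix as placeholders:
# Assumed usage: compute normalization statistics directly from one feature matrix.
normalizer = FeatureNormalizer(feature_matrix=training_feature_matrix)

# Normalize other feature matrices with those statistics.
feature_matrix_normalized = normalizer.normalize(test_feature_matrix)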
FeatureStacker¶
Feature stacking class. The class takes a feature vector recipe and a FeatureRepository, and creates the corresponding feature matrix.
Feature vector recipe
With a recipe one can select the full matrix, only a part of it with start and end indices, or individual rows from it (see the usage sketch after the recipe example below).
Example recipe:
[
    {
        'method': 'mfcc',
    },
    {
        'method': 'mfcc_delta',
        'vector-index': {
            'channel': 0,
            'start': 1,
            'end': 17,
            'full': False,
            'selection': False,
        }
    },
    {
        'method': 'mfcc_acceleration',
        'vector-index': {
            'channel': 0,
            'full': False,
            'selection': True,
            'vector': [2, 4, 6]
        }
    }
]
See dcase_framework.ParameterContainer._parse_recipe() for how a text recipe can be conveniently used to generate the structure above.
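A minimal usage sketch of the stacker itself, following the same pattern as the FeatureAggregator example further below; the file names are placeholders:
# Stack MFCCs and their deltas as specified by a simple recipe.
feature_stacker = FeatureStacker(recipe=[{'method': 'mfcc'}, {'method': 'mfcc_delta'}])

# Load one feature container per method named in the recipe (file names are placeholders).
feature_repository = FeatureContainer().load(
    filename_list={
        'mfcc': 'mfcc.cpickle',
        'mfcc_delta': 'mfcc_delta.cpickle',
    }
)

# Create the stacked feature matrix from the repository.
feature_matrix = feature_stacker.feature_vector(feature_repository=feature_repository)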
FeatureStacker (recipe[, feature_hop]) | Feature stacker
FeatureStacker.normalizer (normalizer_list) | Stack normalization factors based on stack map
FeatureStacker.feature_vector (feature_repository) | Feature vector creation
FeatureStacker.process (feature_data) | Feature vector creation
FeatureAggregator¶
The feature aggregator can be used to process a feature matrix in processing windows. This processing stage can be used to collapse features within a certain window length, for example by calculating their mean and standard deviation, or to flatten the matrix into a single feature vector.
Supported processing methods:
flatten
mean
std
cov
kurtosis
skew
The processing methods can be combined.
Usage examples:
feature_aggregator = FeatureAggregator(
    recipe=['mean', 'std'],
    win_length_frames=10,
    hop_length_frames=1,
)
feature_stacker = FeatureStacker(recipe=[{'method': 'mfcc'}])
feature_repository = FeatureContainer().load(filename_list={'mfcc': 'mfcc.cpickle'})
feature_matrix = feature_stacker.feature_vector(feature_repository=feature_repository)
feature_matrix = feature_aggregator.process(feature_container=feature_matrix)
FeatureAggregator (\*args, \*\*kwargs) | Feature aggregator
FeatureAggregator.process (feature_data) | Process features
FeatureMasker¶
The feature masker can be used to mask segments of a feature matrix out. For example, erroneous segments of the signal can be excluded from the matrix.
Usage examples:
feature_masker = FeatureMasker(hop_length_seconds=0.01)
mask_events = MetaDataContainer([
    {
        'event_onset': 1.0,
        'event_offset': 1.5,
    },
    {
        'event_onset': 2.0,
        'event_offset': 2.5,
    },
])
masked_features = feature_masker.process(feature_repository=feature_repository, mask_events=mask_events)
FeatureMasker (\*args, \*\*kwargs) | Feature masker
FeatureMasker.process (feature_data) | Process feature repository