Parent-Child Vocal Interaction Data


Summary

Recordings capture the interactions of 24 parent-child pairs (48 speakers in total), such as reading fairy tales, singing children's songs, and conversing. The recordings took place in 3 types of places with different levels of reverberation: an anechoic chamber, a studio apartment, and a dance studio. To examine the effect of the microphone's distance from the source and of the recording device, every session was recorded at 3 distinct distances with 2 types of smartphone, an iPhone X and a Galaxy S7.

Each group consisted of a parent and his or her child and was identified by a unique key (subject ID). The two speakers (speaker a, the parent, and speaker b, the child) sat on the floor (the purple area in the pictures below) and were asked to perform 3 types of interaction: singing children's songs, reading fairy tales, and singing lullabies.

Recording environment: Studio apartment (moderate reverb), Dance studio (high reverb), Anechoic chamber (no reverb)

Device: iPhone X (iOS), Samsung Galaxy S7 (Android)

Recording distance from the source: 0.4 m, 2.0 m, 4.0 m

Volume, full set (sample): ~282 (~3) hours, ~360,000 (~4,000) utterances, ~110 (~0.4) GB

Format: wav/h5 (16/44.1 kHz, 16-bit, mono)

Language: Korean
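
As a quick sanity check on the audio format, one of the sample recordings can be inspected in Python (a minimal sketch assuming the third-party soundfile library; the path is one of the sample files shown in the Structure section below):

import soundfile as sf

# One sample recording from the 'dataset' directory (see Structure below).
path = "dataset/AirbnbStudio/sub30040a00000.wav"

audio, sample_rate = sf.read(path)
print(sample_rate)               # expected 16000 or 44100 (16/44.1 kHz)
print(audio.ndim)                # expected 1 (mono)
print(len(audio) / sample_rate)  # duration in seconds, cf. 'length' in the JSON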

Studio Apartment

[Photo: StudioApartment.jpeg]

Dance Studio

[Photo: DanceStudio.jpeg]

Anechoic Chamber

[Photo: AnechoicChamber.jpeg]

Refer to the dataset descriptions in 'docs' for a detailed description and statistics of the full dataset.

This open-source sample is a subset (approximately 1%) of a much larger dataset recorded under the same conditions.

Please contact us (contact@deeplyinc.com) for pricing and licensing.

Featured Parent-Child Vocal Interaction Sound Samples

  • Child reading

  • Child refusing

  • Child singing

  • Parent reading

  • Parent singing

    And more sounds!


Dataset statistics

The illustrations below show statistics for the Deeply Parent-Child Vocal Interaction Dataset. The first two are from the sample dataset, and the others are from the full dataset. For more insight into the dataset, please refer to the detailed description in 'docs' and to 'Parent_Child_Vocal_Interaction.json'.

The sample is a partial set of recordings from a single subject group (sub3004), which consists of a 39-year-old female (parent, speaker a) and a 5-year-old male (child, speaker b).

[Figures fig0-fig5: dataset statistics]

Structure

├── dataset
│   ├── AirbnbStudio
│   │   ├── sub30040a00000.wav
│   │   └── ...
│   ├── AnechoicChamber
│   │   ├── sub30042a00000.wav
│   │   └── ...
│   └── DanceStudio
│       ├── sub30041a00000.wav
│       └── ...
└── docs
    ├── Deeply Parent-Child Vocal Interaction Dataset_Eng.pdf
    └── Deeply Parent-Child Vocal Interaction Dataset_Kor.pdf
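
Given this layout, the recordings can be enumerated per recording environment with the Python standard library (a minimal sketch assuming the tree above has been extracted to the working directory):

from pathlib import Path

root = Path("dataset")  # top-level directory from the tree above

# Count wav files per recording environment.
for env_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    wavs = sorted(env_dir.glob("*.wav"))
    print(f"{env_dir.name}: {len(wavs)} files")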

Parent_Child_Vocal_Interaction.json

 

{"AirbnbStudio":
    {"sub30040a00000": {"label": 2,
                        "subjectID": "sub3004",
                        "speaker": "a",
                        "age": 39,
                        "sex": 0,
                        "noise": 0,
                        "location": 0,
                        "distance": 0,
                        "device": 0,
                        "rms": 0.005859313067048788,
                        "length": 1.521},
     ...
    },
 ...
}

How to decode

label: {speaker a (parent): {0: singing, 1: reading, 2: other utterances},
speaker b (child): {0: singing, 1: reading, 2: crying, 3: refusing, 4: other utterances}}

Subject ID: unique 'sub + 4-digit' key allocated to each subject group

Speaker: unique key allocated to each individual in the subject group ('a': parent, 'b': child)

Sex: {0: Female, 1: Male}

Noise: {0: Noiseless, 1: Indoor noise, 2: Outdoor noise, 3: Both indoor/outdoor noise}

Location: {0: Studio apartment, 1: Dance studio, 2: Anechoic chamber}

Distance: {0: 0.4m, 1: 2.0m, 2: 4.0m}

Device: {0: iPhone X, 1: Galaxy S7}

Rms: Root mean square value of the signal

Length: length of the signal in seconds

* In polyphonic utterances, fields such as label, speaker, and sex are longer than usual, because the information for both speaker a and speaker b is written in the same field.

For example, if speaker a (parent, male, 35 years old) sings and speaker b (child, female, 3 years old) tries to talk in a single audio file, speaker would be 'ab', sex would be '10' (male (speaker a), female (speaker b)), and label would be '04' (singing (speaker a), other utterances (speaker b)).
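
Putting the mappings above together, the metadata can be decoded in Python (a minimal sketch: the dictionaries transcribe the tables above, the character-by-character split of polyphonic fields follows the note above, and the decode helper is illustrative, not part of the dataset):

import json

# Mappings transcribed from the tables above.
LABELS = {
    'a': {0: 'singing', 1: 'reading', 2: 'other utterances'},  # parent
    'b': {0: 'singing', 1: 'reading', 2: 'crying',
          3: 'refusing', 4: 'other utterances'},               # child
}
SEX = {0: 'Female', 1: 'Male'}
NOISE = {0: 'Noiseless', 1: 'Indoor noise', 2: 'Outdoor noise',
         3: 'Both indoor/outdoor noise'}
LOCATION = {0: 'Studio apartment', 1: 'Dance studio', 2: 'Anechoic chamber'}
DISTANCE = {0: '0.4m', 1: '2.0m', 2: '4.0m'}
DEVICE = {0: 'iPhone X', 1: 'Galaxy S7'}

def decode(entry):
    """Decode one utterance entry, including concatenated polyphonic fields."""
    speakers = str(entry['speaker'])  # e.g. 'a', 'b', or 'ab'
    labels = str(entry['label'])      # e.g. 2 -> '2', or '04' when polyphonic
    sexes = str(entry['sex'])         # e.g. 0 -> '0', or '10' when polyphonic
    per_speaker = [
        {'speaker': 'parent' if spk == 'a' else 'child',
         'label': LABELS[spk][int(lab)],
         'sex': SEX[int(sx)]}
        for spk, lab, sx in zip(speakers, labels, sexes)
    ]
    return {'speakers': per_speaker,
            'noise': NOISE[entry['noise']],
            'location': LOCATION[entry['location']],
            'distance': DISTANCE[entry['distance']],
            'device': DEVICE[entry['device']]}

with open('Parent_Child_Vocal_Interaction.json') as f:
    meta = json.load(f)

print(decode(meta['AirbnbStudio']['sub30040a00000']))
# For the example entry shown above: parent, 'other utterances', Female,
# noiseless, studio apartment, 0.4 m, iPhone X.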

License

Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)


Office : E02, Space Sallim 2F, 10, Noryangjin-ro, Dongjak-gu, Seoul, Republic of Korea

Tel : +82 70-7459-0704

E-mail : contact@deeplyinc.com

Copyright © Deeply, Inc. All rights reserved.