
Parent-Child Vocal Interaction Data


Summary

This dataset contains interactions between 24 parent-child pairs (48 speakers in total), such as reading fairy tales, singing children's songs, and conversing. The recordings took place in 3 types of spaces with different levels of reverberation: an anechoic chamber, a studio apartment, and a dance studio. To examine the effect of the distance between the microphone and the speaker, each interaction was recorded at 3 distinct distances with 2 types of smartphone, an iPhone X and a Galaxy S7.

Each group consisted of one parent and his or her child, and each group is identified by a unique key (subject ID). The two speakers (speaker a, the parent; speaker b, the child) sat on the floor (the purple area in the pictures below) and were asked to perform 3 types of interactions: singing children's songs, reading fairy tales, and singing lullabies.

Recording environment: Studio apartment (moderate reverb), Dance studio (high reverb), Anechoic chamber (no reverb)

Device: iPhone X (iOS), Samsung Galaxy S7 (Android)

Recording distance from the source: 0.4 m, 2.0 m, 4.0 m

Volume (full dataset, with sample in parentheses): ~282 (~3) hours, ~360,000 (~4,000) utterances, ~110 (~0.4) GB

Format: wav/h5 (16/44.1 kHz, 16-bit, mono)

Language: Korean
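
As a quick sanity check, the wav files can be opened with any standard audio library. Below is a minimal sketch in Python, assuming the soundfile package is installed and using a file name from the Structure section further down:

import soundfile as sf

# One of the sample recordings (see the Structure section below).
audio, sr = sf.read("dataset/AirbnbStudio/sub30040a00000.wav")

print(sr)           # 16000 or 44100, per the format spec above
print(audio.shape)  # one-dimensional array, since recordings are mono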

[Photos of the three recording spaces: Studio Apartment, Dance Studio, Anechoic Chamber]

Refer to the dataset descriptions in 'docs' for details and statistics on the entire dataset.

This dataset is a subset (approximately 1%) of a much larger dataset that was recorded under the same conditions as these open-source samples.

 

Please contact us (contact@deeplyinc.com) for pricing and licensing.

Featured Parent-Child Vocal Interaction Sound Samples

  • Child reading

  • Child refusing

  • Child singing

  • Parent reading

  • Parent singing

    And more sounds!


Dataset statistics

The figures below show statistics for the Deeply Parent-Child Vocal Interaction Dataset. The first two graphs are from the sample dataset, and the others are from the full dataset. For more insight into the dataset, please refer to the detailed description in 'docs' and to 'Parent_Child_Vocal_Interaction.json' in 'Dataset'.

The sample is a partial set of recordings from a single subject group (sub3004), which consists of a 39-year-old female (parent, speaker a) and a 5-year-old male (child, speaker b).

[Statistics figures fig0-fig5: the first two from the sample dataset, the rest from the full dataset]

Structure

├── dataset
│   ├── AirbnbStudio
│   │   ├── sub30040a00000.wav
│   │   └── ...
│   ├── AnechoicChamber
│   │   ├── sub30042a00000.wav
│   │   └── ...
│   └── DanceStudio
│       ├── sub30041a00000.wav
│       └── ...
└── docs
    ├── Deeply Parent-Child Vocal Interaction Dataset_Eng.pdf
    └── Deeply Parent-Child Vocal Interaction Dataset_Kor.pdf
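
To enumerate the recordings, a plain walk over the directory layout above is enough. A minimal sketch:

import os

wav_paths = []
for root, _, files in os.walk("dataset"):
    for name in files:
        if name.endswith(".wav"):
            wav_paths.append(os.path.join(root, name))

# The parent folder encodes the recording location,
# e.g. dataset/DanceStudio/sub30041a00000.wav
print(len(wav_paths))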

Parent_Child_Vocal_Interaction.json

 

{"AirbnbStudio":
    {"sub30040a00000": {"label": 2,
                        "subjectID": "sub3004",
                        "speaker": "a",
                        "age": 39,
                        "sex": 0,
                        "noise": 0,
                        "location": 0,
                        "distance": 0,
                        "device": 0,
                        "rms": 0.005859313067048788,
                        "length": 1.521},
     ...
    },
 ...
}
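
The metadata is keyed first by recording location and then by file name (without the extension), so each clip's entry can be looked up directly. A minimal sketch, assuming the JSON file sits in the working directory:

import json

with open("Parent_Child_Vocal_Interaction.json") as f:
    meta = json.load(f)

entry = meta["AirbnbStudio"]["sub30040a00000"]
print(entry["length"], entry["rms"])  # 1.521 0.005859...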

How to decode

Label: {speaker a (parent): {0: singing, 1: reading, 2: other utterances},
speaker b (child): {0: singing, 1: reading, 2: crying, 3: refusing, 4: other utterances}}

Subject ID: Unique 'sub + 4-digit' key allocated to each subject group

Speaker: Unique key allocated to each individual in the subject group

Sex: {0: Female, 1: Male}

Noise: {0: Noiseless, 1: Indoor noise, 2: Outdoor noise, 3: Both indoor/outdoor noise}

Location: {0: Studio apartment, 1: Dance studio, 2: Anechoic chamber}

Distance: {0: 0.4m, 1: 2.0m, 2: 4.0m}

Device: {0: iPhone X, 1: Galaxy S7}

RMS: Root mean square value of the signal

Length: Length of the signal in seconds

* In polyphonic utterances, fields such as label, speaker, and sex are longer than usual, because the information for both speaker a and speaker b is written in the same field.

For example, if speaker a (parent, male, 35 years old) sings and speaker b (child, female, 3 years old) tries to talk in a single audio file, speaker would be 'ab', sex would be '10' (male for speaker a, female for speaker b), and label would be '04' (singing for speaker a, other utterances for speaker b).
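
Putting the decode tables and the polyphonic convention together, a small decoder can recover per-speaker information. A sketch in Python, with the mapping dictionaries transcribed from the key above (the decode function itself is ours, not part of the dataset):

# Decode tables transcribed from the key above.
LABELS = {"a": {0: "singing", 1: "reading", 2: "other utterances"},
          "b": {0: "singing", 1: "reading", 2: "crying",
                3: "refusing", 4: "other utterances"}}
SEX = {0: "Female", 1: "Male"}

def decode(entry):
    # In polyphonic entries, speaker/sex/label hold one character
    # per speaker, so iterate over them position by position.
    speakers = str(entry["speaker"])
    sexes = str(entry["sex"])
    labels = str(entry["label"])
    return [(s, SEX[int(x)], LABELS[s][int(l)])
            for s, x, l in zip(speakers, sexes, labels)]

# The polyphonic example from the note above:
print(decode({"speaker": "ab", "sex": "10", "label": "04"}))
# [('a', 'Male', 'singing'), ('b', 'Female', 'other utterances')]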

License

Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)



Office: E02, Space Sallim 2F, 10, Noryangjin-ro, Dongjak-gu, Seoul, Republic of Korea

Tel: +82 70-7459-0704

E-mail: contact@deeplyinc.com

Copyright © Deeply, Inc. All rights reserved.
