Protocol for cerebellar battery: controls

Centralized document for how to run each session for controls in the cerebellar battery (2022-2023).

Session 1

Before the participant arrives

  • Gather assessment materials
    • CCAS form and instructions
    • Motor speech instructions
    • Participant history (Qualtrics) 
    • Hearing screening
  • Get two copies of the UCSF consent form (one for us, one for them)
  • Gather special hardware
    • ¼” to ¼” cord
    • Open back headphones
  • Run a general audio check to ensure that the levels are approximately correct  

General preparation information

 ***NOTE*** this is generic information for running participants; this may not entirely apply for cerebellar battery. Use discretion. 

Before the participant arrives

Run check_audioLevels in Matlab, then click Part A. (See the full check_audioLevels guide for details.)

  1. Turn on SPL meter and press Fast/Slow and Level.
  2. Adjust the microphone gain on the black amplifier (the "Your Headphones" and "pp headphones" knobs) until the SPL meter reads 59.5 to 60.5 dB.
  3. Press spacebar to stop.
  4. Turn off SPL meter
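
The calibration window in step 2 amounts to a simple tolerance check. A minimal sketch (the function name and the 60 dB target center are ours, inferred from the 59.5-60.5 dB range above):

```python
def spl_in_target(reading_db, target=60.0, tol=0.5):
    """Return True if an SPL meter reading falls inside the accepted
    calibration window (59.5-60.5 dB for a 60 dB target)."""
    return abs(reading_db - target) <= tol

# Keep adjusting the amplifier gain knobs until this would return True.
assert spl_in_target(60.2)
assert not spl_in_target(59.4)
```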

Set out a paper copy of the appropriate consent form (the UCSF form noted above) on the participant's desk.

Open the lab email and monitor for participant's arrival.

How to access the lab email

  1. In the web browser, open your personal UW email inbox through Outlook. You can do this through MyUW if needed.
  2. In the top right corner, click the circle with your profile picture/initials.
  3. Click Open another mailbox
  4. Enter speechmotor@waisman.wisc.edu and click Open

If it's the appointment time but the participant hasn't called/emailed yet

  1. Find the participant's phone number.
    1. If you have access to Qualtrics (i.e., if you're on this list), use the Finding Experiment Running Info KB page to find the participant and their phone number
    2. If you don't have Qualtrics access, Slack the lab manager and ask for the phone number
  2. Wait 5 minutes past the scheduled appointment time, then call the participant.
    1. If they are at the building, on their way, or can arrive really soon, continue with the appointment
    2. If we have back-to-back participants and their late arrival would delay the next participant, just tell them that the lab manager will email them to reschedule
    3. You don't need to leave a message

When the participant arrives

The participant will notify you of their arrival by either emailing speechmotor@waisman.wisc.edu or calling the phone in 544. When they contact you:

  • Confirm they are at the North Entrance (they are next to the soccer fields)
  • Tell them you will meet OUTSIDE the door which says Lobby Clinics

If the participant has arrived but you can't confirm that they are at the correct entrance, write down their phone number before going downstairs. (The lab manager can give you their phone number if you don't have it.) You may need to check the North Entrance and give them directions by phone, or walk around to find them.

Meet the participant downstairs

Grab the clipboard with the SPEECH STUDY sign, greet them at the entrance, go upstairs, and direct them to 544A.

Participants can remove their mask once in the experiment room. They will need to remove their mask during speaking tasks.

FAQ

Participants are not required to wear a mask on the path we take them from the entrance to the exam room. We are not permitted to exclude participants who don't wear a mask or aren't vaccinated. Talk to the lab manager if you have concerns about this (university-level) policy.

Hardware checks for reaching studies

These should not have any occasion to change between studies, but it is good to check anyway. Note that you should run these checks on the REACHING computer, which is on the cart (not Quiberon). 

Check sampling rate of tablet: 

  1. Windows start button
  2. Wacom tablet properties
  3. Click on device
  4. Needs to be in "Recognition Data" mode

Check refresh rate of monitor: 

  1. Control panel
  2. Display
  3. Adjust resolution
  4. Advanced settings
  5. On the Monitor tab, confirm the refresh rate is 144 Hz

 

Consent

Obtain consent using the UCSF consent form. 

"This is a consent form for participation in our study. It tells you about our research and what you will be doing today. In this study you will be doing some brief assessments, some experiments where you use a joystick to make reaching movements, some experiments where you speak into a microphone and listen to your own speech over headphones, and some experiments where you listen to different sounds and answer questions about them. 

If at any time you would like to stop participating in the study, that is okay. You will still be compensated for the time you have spent here today. All of your information will be kept confidential, and if you have any questions, you can ask me today or contact our lead researcher, Dr. Benjamin Parrell at the number listed on the consent form. I will give you some time to look this over now. You can sign this copy, which we will keep with us, and the second copy here on the desk is yours to keep."

 

Assessments

  1. How to conduct a hearing screening (simplified) (we are not excluding people with mild-moderate hearing loss, but we are collecting the data) 
  2. Participant history
  3. CCAS assessment (cerebellar cognitive affective syndrome---use paper assessment) 
  4. Motor speech task 

 

Experiments

At the beginning of this session: 

  1. Open-back headphones should be connected at the speech computer
  2. Microphone should be connected at the speech computer

1. Reaching adaptation

Special consideration

This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the FIRST session.

In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

  1. Protocol for cerebellar battery: controls
  2. For patients

What's special about this experiment

This is a reaching experiment, with a separate interface computer. 

The lights should be OFF for this experiment so the participant can only see the cursor, not their actual hand. 

Prepping for participant

These should not have any occasion to change between studies, but it is good to check anyway. Note that you should run these checks on the REACHING computer which is on the cart (not Quiberon). 

Check sampling rate of tablet: 

  1. Click on Windows start button
  2. In the search at the bottom, type in Wacom tablet properties, and click on it when it appears
  3. Double click on the Intuos4 XL under "Device"
  4. Make sure it is in "Recognition Data" mode

Check refresh rate of monitor: 

  1. Click on the Windows Start button
  2. Type in settings, click when it appears
  3. Go to System > Display > Advanced Display Settings
  4. There should be two monitors in the dropdown to choose your display. Select Display 2: VG248
  5. The refresh rate is at the bottom. It should be 144.001 Hz 
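
As a quick sanity check on what 144 Hz implies for frame timing (illustrative only; the variable names and the 1 Hz tolerance are our assumptions):

```python
refresh_hz = 144.001          # value reported by Windows for the VG248
frame_ms = 1000 / refresh_hz  # duration of a single frame in milliseconds

# A 144 Hz monitor draws a new frame roughly every 6.94 ms.
assert round(frame_ms, 2) == 6.94
# Flag the display as misconfigured if it is not close to 144 Hz
# (e.g., if it fell back to 60 Hz after a driver update).
assert abs(refresh_hz - 144) < 1
```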

Pre-experiment instructions

Start the experiment by typing in run_cerebReachAdapt_expt and hitting enter. Enter the required responses. Then place the pen on the tablet. 

Tell the participant: 

“The point of this experiment is to better understand how the brain controls reaching movements. We will be measuring how accurately you can reach to targets in different locations. Your data will be used as a normative baseline for future comparisons with neurological patients. So, please try your best to pay attention and follow all instructions.”

“In this experiment, you will be playing a game where you’ll be trying to make quick and accurate reaches to different target locations. For each trial, you will move the cursor to a home location in the center, which will be indicated with a circle. Then you'll make a quick reach to a target that appears somewhere on the screen. You should reach as quickly and accurately as you can, and slice through the targets rather than stopping at the target.”

“Please take some time now to adjust the seat height and scoot in close to the work station. You will be making many reaches towards the edges of the tablet, and I want you to be able to do that without moving any parts of your body other than your arm. Keep the same posture throughout the experiment, and rest your other hand in your lap. Be sure to not swirl around in the chair.”

“You will hold this pen at the red base with your dominant hand – maintain the same grip throughout the experiment.” (demonstrate)

“There are no pre-planned breaks, but if you need to rest for a bit, just wait before moving the cursor to the center for the next trial.”

“Do you have any questions?”

“I’ll give you a minute to get comfortable before turning out the lights.”

    After they are comfortable, turn off the lights, then press SPACE to start the experiment.

    Baseline phase

    Tell the participant: “Move your hand to the start circle and wait for the target to appear. Your goal is to move your hand to the target. Reach through the target as accurately as possible in a quick straight line. Once you start moving, follow all the way through. If you hear a knocking sound, that means that you moved fast enough and far enough for a good trial.”

    Throughout the experiment, you should monitor how they are performing their reaches. You may have to issue corrections. Common issues:

    • When they get “Too Slow” message: “The too slow message means that you did not move fast enough for a valid trial. Remember to slice through the targets with your hand.” This is related to movement time, not reaction time.

    • If participant gets “slice through the target" message: “Remember to move accurately through the target in a quick, straight line.”

    • If they are picking up the paddle: “Please don’t pick up the paddle; just slide it along the table.”

    Rotation phase

    Instructions will appear on the screen for the participant. In this phase, the cursor will move as they move their hand, but the direction it moves will not correspond to where they are reaching. Their goal in this section is to move their hand directly to the target, ignoring the cursor. They should still reach as accurately as possible in a quick straight line and slice through the target. 

    If they have any questions, answer them. Press space when they are ready to begin. 

    Washout

    There are two washout phases. Participants will see instructions on the screen before each phase. Answer any questions they may have. When they are ready to continue, press space. 

    Washout without feedback: In this block they will no longer see the cursor. They should continue to move their hand directly through the target.

    Washout with feedback: In this block they will see the cursor again. They should continue to move their hand directly for the target. 

    If Matlab crashes during the experiment

    There is no restart script. This is a relatively short experiment, and can be restarted if necessary. 

    No special equipment setup needs to be completed between the two reaching studies. 

    2. Reaching compensation

    Special consideration

    This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the FIRST session. This is after cerebReachAdapt so you do not need to re-deliver general instructions on how to reach. 

    In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

    1. Protocol for cerebellar battery: controls
    2. For patients

    What's special about this experiment

    This is a reaching experiment, with a separate interface computer. 

    The lights should be OFF for this experiment so the participant can only see the cursor, not their actual hand. 

    Prepping for participant

    These should not have any occasion to change between studies, but it is good to check anyway. Note that you should run these checks on the REACHING computer which is on the cart (not Quiberon). 

    Check sampling rate of tablet: 

    1. Click on Windows start button
    2. In the search at the bottom, type in Wacom tablet properties, and click on it when it appears
    3. Double click on the Intuos4 XL under "Device"
    4. Make sure it is in "Recognition Data" mode

    Check refresh rate of monitor: 

    1. Click on the Windows Start button
    2. Type in settings, click when it appears
    3. Go to System > Display > Advanced Display Settings
    4. There should be two monitors in the dropdown to choose your display. Select Display 2: VG248
    5. The refresh rate is at the bottom. It should be 144.001 Hz 

    Pre-experiment instructions

    Start the experiment by typing in run_cerebReachComp_expt and hitting enter. Enter the required responses. Then place the pen on the tablet. 

    General instructions (not needed if running after cerebReachAdapt)

    Tell the participant: 

    “The point of this experiment is to better understand how the brain controls reaching movements. We will be measuring how accurately you can reach to targets in different locations. Your data will be used as a normative baseline for future comparisons with neurological patients. So, please try your best to pay attention and follow all instructions.”

    “In this experiment, you will be playing a game where you’ll be trying to make quick and accurate reaches to different target locations. For each trial, you will move the cursor to a home location in the center, which will be indicated with a circle. Then you'll make a quick reach to a target that appears somewhere on the screen. You should reach as quickly and accurately as you can, and slice through the targets rather than stopping at the target.”

    “Please take some time now to adjust the seat height and scoot in close to the work station. You will be making many reaches towards the edges of the tablet, and I want you to be able to do that without moving any parts of your body other than your arm. Keep the same posture throughout the experiment, and rest your other hand in your lap. Be sure to not swirl around in the chair.”

    “You will hold this pen at the red base with your dominant hand – maintain the same grip throughout the experiment.” (demonstrate)

    “There are no pre-planned breaks, but if you need to rest for a bit, just wait before moving the cursor to the center for the next trial.”

    “Do you have any questions?”

    “I’ll give you a minute to get comfortable before turning out the lights.”

      After they are comfortable, turn off the lights, then press SPACE to start the experiment.

      Baseline phase

      Tell the participant: “Move your hand into the start circle and wait for the target to appear. Once the target appears, reach in a smooth motion to hit the target. Make sure that you slice through the target without stopping. You will see a cursor representing your hand position during the reach.”

      “The knock just means you reached far enough, but it doesn’t mean that you hit the target.”

      Throughout the experiment, you should monitor how they are performing their reaches. You may have to issue corrections. Common issues:

      • When they get “Too Slow” message: “The too slow message means that you did not move fast enough for a valid trial. Remember to slice through the targets with your hand.” This is related to movement time, not reaction time.

      • When they get “Too Fast” message: “Try to make a smooth, sweeping movement through the target.”
      • If participant gets “slice through the target" message: “Remember to move accurately through the target in a quick, straight line.”

      • If they are picking up the paddle: “Please don’t pick up the paddle; just slide it along the table.”

      No feedback + jump

      Instructions will appear on the screen for the participant. In this phase, they will continue reaching in a smooth motion to hit the target. They will not see the cursor representing their hand position and the target may move. 

      Participants may anticipate the jump by starting slow and get messages about their movement speed. Encourage them to keep moving in one smooth sweep. 

      If Matlab crashes during the experiment

      There is no restart script. This is a relatively short experiment, and can be restarted if necessary. 

      Before typical production, the participant will move to the speech computer. 

      3. Typical production (VOT) 

         

      What's special about this experiment

      This is a straight production experiment with no auditory feedback manipulations or anything else special. Participants will see words on the screen and read them out loud. They do not need headphones. The study is meant to record typical productions of word-initial voiceless stops. 

      This experiment is part of the cerebellar battery being run at multiple sites. Participants will be coming for full-day visits. As such, they will likely have completed consent forms and participant histories at a different time, not after this individual experiment (at some future point there will be a document on how to wrangle a full- or multi-day participant visit at each site).

      Initial practice

      To run the experiment, type in the command run_cerebTypicalProduction_expt. When prompted, enter the participant code and their height. 

      The participant will see the instructions on the screen. 

      Participants will first do a practice phase to get the first production of each word out of the way and to get familiar with the words and the pacing of the experiment. You can also use this time to fine-tune the gain on the microphone.

      At the end of practice, ask the participant: "The rest of the experiment will go just like that. Are you feeling okay to move onto the task, or would you like to practice again?"  Make sure that the speed is okay for them. 

      If they would like to redo the practice, they can, but there is no benchmark they need to pass in order to move on. 

      Note: if they are having a hard time with how fast the trials are going, you can type 'redo' when prompted, and then type 'yes' when asked if the participant would like slower trials. This will slow the trial down by 0.5 seconds, and then run through practice again with the slower pace. 
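
The 0.5-second adjustment can be pictured as follows (a sketch; the function and variable names are ours, not from the script):

```python
def slowed_trial_duration(base_s, n_slowdowns=1, step_s=0.5):
    """Each 'redo' followed by 'yes' adds 0.5 s to the trial duration."""
    return base_s + n_slowdowns * step_s

# e.g., a hypothetical 2.0 s trial becomes 2.5 s after one slowdown
assert slowed_trial_duration(2.0) == 2.5
```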

      Main Phase

      "We'll now continue with the main experiment, which will take 5-10 minutes. Do you have any questions before we start?"

      If no questions, "Whenever you're ready, you may begin."

      Things to keep an eye on:

      • Monitor the amplitude level. You may need to adjust the microphone gain if the participant starts talking too loudly or too quietly. If the participant loses a lot of oomph in their voice, you can encourage them to take a break or drink some water. 

      Note: there is no pause function in this script, only the automatic breaks (every 20 trials). 

      Data transfer

      • If running at UW: the data will be saved in C:\Users\Public\Documents\experiments\cerebTypicalProduction\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebTypicalProduction\acousticData 
      • If NOT running at UW: the data will be saved into the folder generated by get_acoustSavePath('cerebTypicalProduction'). Copy the participant's folder into the path generated by get_acoustLoadPath('cerebTypicalProduction')
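
If you want to script the copy step at UW, here is a hedged sketch (the helper name and participant code "sp999" are hypothetical; it assumes the server share is reachable from the speech computer):

```python
import shutil
from pathlib import Path

# Paths for UW, from the data transfer notes above
LOCAL = Path(r"C:\Users\Public\Documents\experiments\cerebTypicalProduction\acousticData")
SERVER = Path(r"\\wcs-cifs\wc\smng\experiments\cerebTypicalProduction\acousticData")

def copy_participant(code, local=LOCAL, server=SERVER):
    """Copy one participant's acoustic data folder to the server share,
    refusing to overwrite data that is already there."""
    src, dst = local / code, server / code
    if dst.exists():
        raise FileExistsError(f"{dst} already exists; check before copying")
    shutil.copytree(src, dst)
    return dst

# Usage (hypothetical participant code): copy_participant("sp999")
```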

      If Matlab crashes during the experiment

      As of 9/26/2022 there is no restart script for this function. However, the entire experiment takes about 5 minutes so you can just rerun it. 

       

      Before paced VOT: 

      1. Put open-back headphones on participant
      2. Connect the output amp to the second input of the Scarlett
      3. Adjust volume to about 40% on the headphone channel going to the participant 

      4. Paced VOT production

          

      Special running circumstances

      This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the FIRST session.

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. Protocol for cerebellar battery: controls
      2. For patients

      What's special about this experiment

      This is an unusual experiment in that it is not an altered auditory feedback study. In this study, participants will first see the word that they will use in that trial. They will then hear a series of clicks through the headphones. The clicks will start relatively slowly but get faster through the trial. There will be a countdown for the first 3 clicks, so they can get used to the pace and prepare. Then the word will appear on the screen. At that point, they will repeat the word in time with the clicks. 

      ---

      This experiment is conducted using PTB (Psychtoolbox), so the usual duplicated experimenter display is not available. Instructions to the experimenter are displayed in Matlab's command window. 

      Hardware prep

      Special hardware requirements: 

      • Open-back headphones, or headphones that do not excessively muffle normal auditory feedback.  
      • A Scarlett with two input ports. Both the Scarlett 2i2 and the Scarlett Solo will support this experiment. On a Scarlett 2i2, both input ports are dual XLR+1/4". On a Scarlett Solo, the first input is XLR and the second is 1/4". 
      • 1/4" to 1/4" (6.3 mm) male to male connector cord. This experiment will work with a single stereo cord or using one side of dual input/output cords. If you are using a dual-dual cord, be sure to connect the same side on each end (red to red or black to black). 

      Before the experiment begins: 

      1. Change out the headphones in the experiment room for open-back headphones. These headphones have a mesh on the back of the earpieces rather than solid plastic. 
      2. Connect the fourth output port of the headphone amp to the second input port of the Focusrite using the 1/4" to 1/4" cord. (There will be one labeled "cerebPacedVot" near Burnham.) 
      3. Turn the gain of the fourth output port to 50%. Turn the gain of the second input port to 50%. Note: This is NOT the volume that participants will hear the metronome at. That volume will be controlled by the second headphone amp (the one that goes to the participant)

      To test the connection between the headphone amp and the Focusrite, in the Matlab command window, type: 

      test_outInputGain;

      The metronome channel should peak at an amplitude of roughly +/- 0.1 to 0.2. If you get any warnings about either channel, check that the gain is set to 50% on both the headphone amp and the Focusrite ports. If your figure shows clipping in the line labeled channel 2, turn down the gain on the headphone side if possible; otherwise, turn down the gain on the Focusrite side. Then run the test again. 
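
The amplitude criteria above amount to a range check on the channel's peak (a sketch; the exact clipping threshold is our assumption, not a value from test_outInputGain):

```python
def metronome_channel_ok(peak_amplitude, lo=0.1, hi=0.95):
    """A healthy metronome channel peaks around 0.1-0.2;
    near-zero means no signal, near 1.0 means clipping."""
    return lo <= abs(peak_amplitude) <= hi

assert metronome_channel_ok(0.15)       # in the expected range
assert not metronome_channel_ok(0.001)  # channel disconnected or muted
assert not metronome_channel_ok(1.0)    # clipping: turn down the gain
```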

      Setup diagram: 

      Diagram for hardware setup

      Experiment prelude 

      Start the experiment by typing in run_cerebPacedVot_expt in the command window and hitting enter.

      Hardware check

      You will first run through equipment checks and will have to answer a few questions about the equipment setup: 

      1. Is the participant wearing OPEN BACK headphones? (answer should be yes)
      2. Is there a 1/4" to 1/4" cord connecting the output amp back to the Focusrite second input? (answer should be yes) 

      Then, tell the participant: "You will hear some clicks shortly. This is just a check to make sure our equipment is working properly." 

      After the automatic check is done, check the output plotted in the figure. You should be able to clearly see the clicks in the signal, with a max amplitude of about +/- 0.2. If either channel is not receiving input, Matlab will inform you which channel is wrong. Double check that everything is connected and powered on and redo the hardware check. 

      Volume check

      After the hardware checks, you will do a volume check. Tell the participant: "We'll start with a volume check to make sure the metronome is at a good loudness. You'll want to be able to comfortably hear this while you are talking." 

      • If you are in the same room as the participant or can see them while the clicks are ongoing
        • tell them: "I'm going to play the metronome that you'll hear for the rest of the study. As it is playing, tell me if you want the volume lower or higher by pointing with your finger and I'll adjust it as we go. If it is okay, give me a thumbs up." 
      • If you are in a separate room and can't see them:
        • Err on the side of caution and turn down the gain on their headphone output before starting. 
        • Wait for the entire stimulus to play through. 
        • After the stimulus is done, ask the participant: "Is the volume of the metronome okay, or would you like it to be adjusted?" 

      The command window will then ask you if you would like to repeat the volume check. If you have to adjust the volume at all, press 1. If the volume was good, press 0. Note: The PTB screen is listening for this input, not Matlab, so you do not have to actually type in the command window. 

      Matlab will then prompt you to press the spacebar when you are ready to repeat. Repeat as many times as is necessary. 

      Practice

      This experiment will then move onto a practice phase so that participants can get used to the task, and so that you as the experimenter can see if the pacing clicks need to be slowed down. 

      Tell the participant: "We'll start with a practice section so you can get used to the task."  The remainder of the instructions will be on the screen. 

      When they finish the practice, the experiment will ask you if it seems like they need to go slower. This is a provision for the cerebellar patients, who often have a hard time repeating one syllable at speed. We do expect that people will have difficulty towards the end of the clicks, when they are speaking the fastest. However, if the participant consistently has difficulty near the beginning of the task (i.e., within the first 5 clicks they speak for), you can type in "slower" and the practice will repeat at a slightly slower pace. 

      If it seems that the participant runs out of breath too early, encourage them to try again, this time taking a deep breath before they start speaking.

      If the participant did not have difficulty, ask them: "Are you comfortable with the task, or would you like to practice again?" If they express hesitation that the clicks are too hard at the end, you can tell them that that is okay, and they should just try their best. You can repeat practice as many times as they like. 

      Main Phase

      "We'll now move onto the main experiment, which will take about 10 minutes. You will be doing the same task you just practiced for the rest of the experiment. Do you have any questions before we start?"

      If no questions, "Whenever you're ready, you may begin."

      Things to keep an eye on:

      • Monitor the amplitude level. You may need to adjust the microphone gain if the participant starts talking too loudly or too quietly.
      • Make sure the participant is not running out of breath midway through. 

      This is a self-paced experiment, in that participants have to press the spacebar to start the next trial. So if participants need a little break, they can always stop at a trial. There is no additional pause function. 

      After the experiment

      "Great job! You are finished with the speaking portion of the experiment. I will be in to take off your headphones." 

      After the participant leaves

      1. Copy participant data from the local drive to the smng server (see the data location noted below this list)
      2. If you completed a hearing screening, copy the results from the local drive to the smng server
        1. There are folders on the desktop called "audiometer results - local" and "audiometer results - server". Drag the participant's audiometer results file from the "local" to the "server" folder.
      3. Fill out Lab Notebook, located in \\wcs-cifs\wc\smng\admin\
      4. Fill out check register or the extra credit register, located in \\wcs-cifs\wc\smng\admin\
      5. Double-check that the Experiment Checklist is complete
      6. Return the Experiment Running Sheet and the Checklist to the lab manager

      The data will be saved in C:\Users\Public\Documents\experiments\cerebPacedVot\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebPacedVot\acousticData 

      If Matlab crashes during the experiment

      [[ Gotta figure out what to do for this experiment. RK]] 

       

      This is a suggested time for a break: you will need to do some equipment setup, and this is approximately the midpoint of the experiments. 

      1. Disconnect the output amp from the Scarlett
      2. Change out open-back headphones for closed-back headphones 
      3. Run check_audioLevels with noise alone and with sustained speech to adjust for speech experiment levels. 

      5. Formant adaptation and compensation

      Special circumstances: part of battery

      This experiment is part of the cerebellar battery run in 2022-2023. 

      • For controls, it is in the FIRST session (adaptation and compensation together). 
      • For patients at UW, it is in the SECOND session (adaptation and compensation together). 
      • For patients at UCSF, it is in the THIRD session (compensation only; adaptation is a separate experiment with MEG). 

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. Protocol for cerebellar battery: controls
      2. For patients

      What's special about this experiment 

      This experiment has the option to run just 'adapt' (adaptation experiment), just 'comp' (compensation experiment), or 'both' (adaptation followed by compensation). The adaptation experiment can also be run by itself with the option 'adapt_short', which has 80 trials instead of 200, and stops before the passage reading portion. At UW, we will use the option 'both'. At UCSF, session three will use 'comp'. 

      You will need the RAINBOW PASSAGE for this experiment if you are doing the adaptation portion. (You don't need this if doing 'adapt_short'.) In the adaptation portion, participants do one run, then take a break to read the rainbow passage, and then do a second run. 

      Prepping for participant

      This experiment uses: 

      1. Audapter
      2. Focusrite Scarlett
      3. Closed-back headphones
      4. Microphone (head-mounted or stand) 
      5. Rainbow passage (adaptation experiment only) 

      Before running this experiment, you need to check the audio levels of the noise over the headphones, as well as the noise + participant speech (if at UW, see this document for more details.)

• Controls at UW: You will have to recheck the levels after running the previous experiment (pacedVOT).
• Patients at UW: This is the first experiment of session 2, so conduct the equipment checks as part of session preparation. 

      Pre-experiment instructions

Tell the participant: "In this experiment, you will be speaking and listening. When you see the word on screen, read it out loud, just like you would normally say it. You will be speaking into the microphone on the desk, and you will hear your own voice and some noise played back through these headphones. Do you have any questions?"

      • If adapt: "This experiment will have four different sections." 
      • If adapt_short: "This experiment will have two different sections."
      • If comp: "This experiment will have three different sections."
      • If both: "This experiment will have six different sections." 

      Type run_cerebAAF_expt into the command window and hit enter. You will be prompted to type in adapt, adapt_short, comp, or both. Type in the appropriate experiment for your site and hit enter. 

      Pretest phase: LPC order 

      In all cases, the first task that will come up is an LPC order check. 

      "For this first section, you will see one word at a time appear on the screen." 

      Use the check_audapterLPC GUI to find an appropriate LPC for this participant. If you need to fix the vowel boundaries, use the Change OSTs button.

      Adaptation section

      If you are running "adapt" or "both" or "adapt_short", the adaptation section will be next. 

      Tell the participant: "This next speaking section lasts about 10 minutes. Read each word when it appears on screen." 

      For adapt_short: After 80 trials, the experiment will conclude.

For both or adapt: The first run has 160 trials. After trial 160, there will be a message on screen saying “Time for a break,” and the command window will say “Pausing experiment for passage reading.”

      Tell the participant: "In this next section, you'll be reading a short passage. Please pick up the sheet of paper on the table in front of you titled 'The Rainbow Passage'. You'll be reading the text on this paper out loud. While you speak, you will hear your own voice and noise through the headphones. Do you have any questions before we start?

Great. When you are ready, let me know and I will start our recording system. When you hear the noise start through the headphones, you can go ahead and read the passage out loud."

      Follow the command window prompts to start the Audapter feedback and stop it once they’ve read the passage.

Tell the participant: "There are a few more speaking trials with single words again."

      Start the “adaptation run 2”.

      Compensation section

      If you are running "comp", this will be first. If you are running "both", this will be second. 

This experiment starts with duration training, which gets the participant saying the vowel at an optimal duration: long enough for the feedback to be used, but not unnaturally slow. 

      Tell the participant: 

"Next, we will do some practice speaking trials where we will try to get you to talk at a particular speed. Read the words aloud as they appear, just as you did previously. Try to speak in a way that is a bit more stretched out than how you would normally talk, but still somewhat natural. For example, instead of saying “head” I might say “heeeead.” After you say each word, you will get some feedback about the speed of your speech. The computer may tell you to speak slower or faster. If you spoke at a good speed, you will see a green circle."

If the participant had trouble, give them some tips. This task is challenging for many people. Just remember that the most important part is that their vowel is long enough that they can hear their own vowel formants and compensate for them.

      You should try to have them get 7 out of 10 or better. If they’ve got the gist but the automated system isn’t cooperating, you can tell them, “you’re doing great – just keep doing that and ignore what the computer says.”

If they got less than 7/10, choose redo. Once they get 7/10 or better, tell the participant: "You’ll now be doing the same task with more trials. You will still get feedback about the speed of your speech. This section lasts about 10 minutes."

      Choose move on.
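The pass/redo decision above boils down to counting successful practice trials (a minimal sketch, assuming the practice round has 10 trials as described; the real check happens inside the MATLAB experiment code):

```python
# Sketch of the 7-out-of-10 duration-training criterion described above.
# Illustrative only; the actual pass/fail logic lives in the experiment script.

def passed_training(trial_results, threshold=7):
    """trial_results: list of booleans, one per practice trial (True = good speed)."""
    return sum(trial_results) >= threshold
```

For example, 8 good trials out of 10 means move on; 6 out of 10 means redo.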

      If Matlab crashes during the experiment

      As of 10/21/2022 RK is unaware of a restart script for this experiment. 

       

      No equipment adjustments are needed before the formant JND study. 

      6. Formant JND

      Special circumstances: part of battery

      This experiment is part of the cerebellar battery run in 2022-2023. 

      • Patients at UW: Both f0 and f1 are in session 2
      • Controls at UW: Both f0 and f1 are in session 1
      • Patients at UCSF: Both f0 and f1 are in session 2

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. Protocol for cerebellar battery: controls
      2. For patients

      What's special about this experiment

      This is a perceptual experiment that first records tokens from the participant, and then uses those tokens as the basis of the perceptual stimuli. 

      This code can run in either f0 mode (pitch perception) or f1 mode (formant perception). You will have to specify which one at the beginning of the experiment. 

      Prepping for participant

      Participant will need: 

      1. Microphone (for recording initial tokens)
      2. Headphones --- do not put the headphones on until after the participant is done recording tokens. 
      3. Keyboard (for responses) 

      Pre-experiment instructions

Tell the participant: "In this experiment there will be two sections. In the first section, you will say {ba / bed} a few times and we will record your speech. In the second section, you will hear three different tokens and say which two are most similar to each other. There will be a short pause between sections while I get things set up. Do you have any questions?" 

      Stimulus setup

      Type run_cerebJND_expt in the Matlab command window and press enter. 

      You will be prompted to enter in either f0 or f1.

      • If you are doing PITCH perception, type f0.
        • You will be prompted to enter which pitch shifting algorithm you want. [[ANNEKE: WHICH ALGORITHM TO USE IN WHAT CIRCUMSTANCES?]]
      • If you are doing FORMANT perception, type f1. 
1. The participant will be prompted to say “ba” (f0) or “bed” (f1) for five trials. They can just talk normally. 

      2. After they are done recording the tokens, you will get the option to either re-record (if you think that none of the trials were a good representative of the participant's "clean" speech), or choose the trial that you want to base the continuum on. 

      3. If you “move on,” then the script will automatically trim each trial to the speech portion and play each through your computer’s default speakers. (If you want to listen with headphones, I recommend setting your Windows default sound output device to the desktop computer, and then plugging in headphones.)

      4. If one of the trials seems like a good candidate for creating the continuum, enter that trial's number (1-5) in the command window and hit enter. 
        1. If you want to hear them all again, press 9
        2. If they all actually sound bad, press any number between 1 and 5, then type "yes", then type "redo" when given the option

      5. Once you pick a trial that sounds okay, you will see a waveform showing you how the automatic trimming was done. You cannot change the trimming, but it might inform your decision of whether that’s a good trial or not. If it looks good, say Yes. The trimming should include the entire vowel. 

      6. Now, you’ll be given a final choice about if you want to keep that as your final continuum basis, or if you want to record new speech tokens. (redo/move on)
      7. Once you move on, the computer will begin generating tokens with slightly different f0 or f1. Tell the participant: "Okay, you can relax for a few minutes while we generate the sounds that will be used in the next section. This may take a couple of minutes." 
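Conceptually, the generated continuum is a set of tokens whose f0 or f1 is stepped in small increments around the chosen base token. A minimal sketch of that idea (the step size and number of steps here are hypothetical; the real values are set inside run_cerebJND_expt):

```python
# Sketch of building a stimulus continuum around the base token's f0 or F1.
# The step size and number of steps are hypothetical placeholders.

def continuum_values(base_hz, n_steps=11, step_hz=5.0):
    """Return n_steps frequency values centered on base_hz, spaced step_hz apart."""
    half = (n_steps - 1) // 2
    return [base_hz + (i - half) * step_hz for i in range(n_steps)]
```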

      Perceptual staircase (main task) 

After the stimuli have been generated, you will move on to the actual perceptual section of the experiment. Tell the participant: 

"In this section, you will hear three different tokens. Your task is to say if the second token is most like the FIRST token or the LAST token. If the second token is most like the FIRST token, press F. If the second token is most like the LAST token, press J. Do you have any questions?" 

      Answer questions, then tell the participant: "First we will start with a practice round just so you can get used to the task." 

      The practice phase has very large stimulus differences and will take about 1 minute (10 trials). 

If they are okay to move on, tell the participant: "We will now move on to the main section. The task may get very difficult and you might not be sure what the right answer is. That is okay; just take your best guess. This will take about 10 minutes." 

      Start the main task. It maxes out at 100 trials or 32 reversals, whichever comes first.
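The stopping rule can be sketched as follows (illustrative Python; the reversal-counting detail is a generic staircase convention, not a description of the lab's exact implementation):

```python
# Sketch of the staircase stopping rule: the main task ends after 100 trials
# or 32 reversals, whichever comes first. Generic illustration only.

def count_reversals(directions):
    """directions: sequence of +1/-1 step directions, one per trial."""
    return sum(1 for a, b in zip(directions, directions[1:]) if a != b)

def staircase_done(n_trials, directions, max_trials=100, max_reversals=32):
    return n_trials >= max_trials or count_reversals(directions) >= max_reversals
```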

      If Matlab crashes during the experiment

      As of 10/21/2022 there is no remedy for a crashed experiment. 

       

      You will need to take the participant's headphones off to start the pitch compensation study. The first section in pitch compensation is a calibration phase and the participant can hear themselves in free field. Have the participant put the headphones back on when they start the main phase of the study. 

      (Note: this is to avoid the participant having to hear you listen to various different samples of the pitch shifting algorithm; nothing bad will happen if you accidentally leave the headphones on. It is much worse to not put them back on afterwards) 

      7. Pitch compensation

      Special circumstances: part of battery

      This experiment is part of the cerebellar battery run in 2022-2023. For controls, it is in the FIRST session. For patients, it is in the SECOND session. 

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 
      1. Protocol for cerebellar battery: controls
      2. For patients

      What's special about this experiment

This is an experiment examining compensation for upward and downward pitch shifting of the speaker's own voice.

It needs a calibration part during which the f0 of the speaker is calculated and the correct shifting paradigm is selected. If the participant has already run the adaptation version of this experiment (pitchAdaptRetest), the experiment will reuse the stored information about the algorithm and not run the calibration phase again. This calibration part is included in both scripts: run_pitchAdaptRetest_expt.m and run_pitchComp_expt.m.

      Prepping for participant

• closed-back headphones
      • microphone
      • Audapter
      • FocusRite

      Pre-experiment instructions

      Restart MATLAB: For an unknown reason, the off-line pitch shifting doesn't always work after running another Audapter script. 
      Enter run_pitchComp_expt in the command line. 

      Tell the participant: "This experiment will have a calibration phase, followed by one main section. You will be reading the vowel "ah" from the computer screen and listening to your speech over headphones. 

      Don’t hesitate to ask questions or raise concerns at any point."

      Phase 1: Calibration

"We are first going to calibrate our equipment to your voice. In this section, some instructions will appear on the screen. Take your time to read them and ask questions if you are not sure what to do. After that, you will see the word “ah” appear on the screen. When you see the word on the screen, read it out loud, just like you would normally say it, but try to keep a constant pitch. After you speak, there will be a short break while I calibrate our equipment. Do you have any questions?"
      Instructions for calibration
      1. Calculate the f0 of the speaker: The participant is instructed to say “AH” (The instructions also appear on the screen of the participant). You see the waveform in a figure on the experiment computer. You will be prompted ‘Is the recorded sample good?’ at the command line. 

        • If the recording looks okay and there is no clipping visible in the figure (see below for an example), press ‘y’ and hit enter.

        • If there is clipping visible in the figure (see below), reduce the microphone gain, then press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.

        • If there is a problem with the audio signal (participants didn’t speak, said the wrong thing, coughed, etc.), press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.

   [Figure: speech waveform examples]

      Fig 1: Two examples of speech waveforms. In the image on the left, the waveform falls between -1 and 1 (indicated by the red lines). This is an example of an appropriately set microphone gain. In the image on the right, the waveform is “clipped”—it is cut off by the -1 and 1 boundaries. In this case, the microphone gain needs to be reduced. If the microphone gain is too low (not shown), the waveform will have a very small range. Aim to use most of the range between -1 and 1 without any clipping.
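In code terms, the clipping check from Fig 1 amounts to asking whether any sample reaches the ±1 boundary (a sketch; in practice you judge this by eye from the waveform figure):

```python
# Sketch of the clipping check described in Fig 1: a recording is clipped
# if any sample hits the +/-1 boundary.

def is_clipped(samples, limit=1.0):
    """True if any sample in the recording reaches the +/- limit boundary."""
    return any(abs(s) >= limit for s in samples)
```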

2. Enter a percentage value for the upper and lower boundaries for pitch tracking: this value is used to calculate the upper and lower boundaries for tracking the vocal pitch. The closer the boundaries are to the f0, the better the estimate of the pitch and the better the shifting algorithm works. However, there will be problems if the actual pitch goes outside these boundaries. The exact boundaries that work best depend on the speaker. The default value is 20% and will work well for most speakers. After inspecting 3 figures with this percentage (see below), you can change the value. In most cases, the 20% value is the right one. Some pitch contours are not as steady as in the given example, and the green and blue lines touch the red lines once in a while. In this case, the boundaries should be a little larger, such as 25% or 30%.

        • You will see a figure pop up like this: NOTE: IF THE UPPER PANEL IS EMPTY, RESTART MATLAB BECAUSE PITCH SHIFTING IS NOT WORKING.

[Figure: pitch contour with tracking boundaries]

• Press “enter” for the next figure, which shows the pitch contour shifted up (green line). Press “enter” again.

      • The next slide is the pitch contour, shifted down.

      • Press “enter” again.

• Confirm percentage: Both the green and blue lines should be contained within the red horizontal borders in all three figures. (If the participant said “bod,” it's OK if a portion at the end is outside the red borders.) If the boundaries look good, enter ‘y’ in the command line when prompted: 'Is the percentage good?', {'y', 'n'}.

• Examples of boundaries: In the figures below, you see two bars: the upper bar shows the pitch shifting output of Audapter; the lower bar shows the extracted pitch from the waveform. These can differ slightly but should not differ to a large extent. The default boundary value is 20% (example: pitch shifted up, first figure); the 10% boundaries in the example below are too narrow. In case the speaker has an unstable pitch and 20% is too narrow, the boundaries must be adjusted to a larger value, e.g., 25% or 30%. In general, 20% is the lowest value; boundaries only ever need to be adjusted to a larger value. The 10% figure demonstrates what a too-narrow band looks like. The final disturbances are caused by the ‘d’ in the word “bod” and can be ignored.

[Figure: example boundary settings at 20% and 10%]
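The percentage-to-boundary arithmetic is simple: the tracking boundaries sit at f0 × (1 ± percent/100). A quick sketch:

```python
# Sketch of the boundary arithmetic: a 20% setting around a 120 Hz f0
# gives tracking boundaries of 96 Hz and 144 Hz. Illustrative only; the
# actual computation happens inside the experiment code.

def pitch_boundaries(f0_hz, percent=20):
    frac = percent / 100
    return f0_hz * (1 - frac), f0_hz * (1 + frac)
```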

3. Select pitch shifting manner: Next, the AlgorithmSelect window opens. Here, you select one of three algorithms that Audapter can use to shift the pitch of the voice. There are nine buttons on the left of the screen. When you click on one of these buttons, it will play back an audio sample demonstrating how the participant’s voice will sound with that combination of pitch shifting direction (up, down, none) and pitch shifting algorithm on the right (pp_none, pp_peaks, or pp_valleys). It’s not important to understand the differences in the pitch shifting algorithms; they just indicate certain ways to shift a pitch up and down. Listen to the different algorithms to see which one sounds the most natural. In most cases, this will be pp_none. Then click the radio button on the right corresponding to the best algorithm and click “Select Algorithm”.
  [Figure: AlgorithmSelect window]


      Main experiment

For the main experiment, it is important that the participant wears the headphones to receive altered auditory feedback. Check whether the participant is wearing the headphones. If not:

      "Please put the headphones on now." [make sure they are on correctly]

      "On each trial, you will say the word “ah” for about a second like you just did. Start when the text prompt appears on the screen and keep going until the text prompt disappears. Try to keep the pitch of your voice at a constant, monotone level. So, try not to raise your pitch or lower your pitch. You will be given feedback on the screen if you say “ah” for less than the required time or speak too quietly. Just continue with the task and try to adjust your speech accordingly. Before the actual session starts, we will do some practice so you can get used to the task. You will have several breaks throughout the experiment.

Do you have any questions?"

      The experiment starts after you press "enter".

      What to monitor for:

      During the study, monitor for correct loudness in the figures that appear for each trial. In case the signal is too loud, adjust the microphone gain.

      If Matlab crashes during the experiment

      As of 10/24/2022 there is no restart script for this experiment. 

      No equipment adjustments are needed before the pitch JND study. 

      8. Pitch JND

      See section 6 (formant JND; use option f0). 

       

      After the participant leaves

      • Transfer data from all experiments to their respective folders on the server
      • Return hearing assessment equipment to 544 if an experiment is scheduled there between now and the next cerebellar battery participant
      • Fill out lab notebook for each experiment
      • Return consent form and assessments to lab manager

       

       

      Session 2

      Before the participant arrives

      • Check audio levels for general level accuracy
      • Need tapping interface for this session 
      • Only closed-back headphones 

      General preparation information

       

      Before the participant arrives

      Enter check_audioLevels then click Part A. (full check_audioLevels guide here)

      1. Turn on SPL meter and press Fast/Slow and Level.
      2. Adjust the microphone gain on the black amplifier (the "Your Headphones" and "pp headphones" knobs) until the SPL meter reads 59.5 to 60.5 dB.
      3. Press spacebar to stop.
      4. Turn off SPL meter

      Set out a paper copy of the appropriate consent form (highlighted above) on the participant's desk.

      Open the lab email and monitor for participant's arrival.

      How to access the lab email

      1. In the web browser, open your personal UW email inbox through Outlook. You can do this through MyUW if needed.
      2. In the top right corner, click the circle with your profile picture/initials.
      3. Click Open another mailbox
      4. Enter speechmotor@waisman.wisc.edu and click Open

      If it's the appointment time but the participant hasn't called/emailed yet

      1. Find the participant's phone number.
        1. If you have access to Qualtrics (ie if you're on this list), use the Finding Experiment Running Info KB page to find the participant and phone #
        2. If you don't have Qualtrics access, Slack the lab manager and ask for the phone number
      2. Wait 5 minutes past the scheduled appointment time, then call the participant.
        1. If they are at the building, on their way, or can arrive really soon, continue with the appointment
        2. If we have back-to-back participants such that them arriving late is going to mess up another participant, just tell them that the lab manager will email to reschedule
        3. You don't need to leave a message

      When the participant arrives

      The participant will notify you of their arrival by either emailing speechmotor@waisman.wisc.edu or calling the phone in 544. When they contact you:

      • Confirm they are at the North Entrance (they are next to the soccer fields)
      • Tell them you will meet OUTSIDE the door which says Lobby Clinics
      If the participant arrived but you can't confirm that the participant is at the correct entrance, write down their phone number before going downstairs. (The lab manager can give you their phone # if you don't have it.) You may need to check the North Entrance and give them directions by phone, or walk around.

      Meet the participant downstairs

      Grab the clipboard with the SPEECH STUDY sign, greet them at the entrance, go upstairs, and direct them to 544A.

      Participants can remove their mask once in the experiment room. They will need to remove their mask during speaking tasks.

      FAQ

      Participants are not required to wear a mask on the path we take them from the entrance to the exam room. We are not permitted to exclude participants who don't wear a mask or aren't vaccinated. Talk to the lab manager if you have concerns about this (university-level) policy.

       

      Experiments

      1. Pitch adaptation

      Special circumstances: part of battery

      This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the SECOND session. 

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. Protocol for cerebellar battery: controls
      2. For patients

      What's special about this experiment

This is an experiment examining how speakers adapt their pitch upward, in the direction opposite to a downward pitch shift of their own voice.

At the start of the experiment, there is a calibration phase where the f0 of the speaker is extracted and the correct pitch shift paradigm is selected. The f0 is extracted before each session and experiment, but the paradigm-selection part of calibration is only included before whichever experiment is run first; this can be either the adaptation or the compensation study, depending on the set-up. For the later experiment, the algorithm setting is taken from the first collected data instead, and that part of calibration is not redone. 

The adaptation study consists of 1 to 4 sessions in a fixed order (the full 4-session version is: 1. pitch shifting, 2. control session (unperturbed feedback), 3. pitch shifting, 4. control session). If you select more than one session, each session requires an updated initial f0, because the pitch of the voice often declines or increases naturally, even in the unperturbed sessions. 

      Prepping for participant

• closed-back headphones
      • microphone
      • Audapter
      • FocusRite
       

      Pre-experiment instructions

      RESTART MATLAB, because sometimes the pitch shifting paradigm doesn't work correctly if other Audapter scripts have been running.

      Enter run_pitchAdaptRetest_expt in the command line and press "enter". 

      After entering the ID of the participant and their height, there is an option to select the number of sessions.

      enter 1: only one session that shifts the pitch downward 1 semitone.

      enter 2: first a pitch shifted session followed by a "no shift" session (control).

      enter 3: pitch shifted session, control, pitch shifted session.

      enter 4: pitch shifted session, control, pitch shifted session, control.
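The four options select prefixes of one fixed sequence, which can be sketched as:

```python
# Sketch of the session-count option in run_pitchAdaptRetest_expt: entering
# 1-4 selects a fixed-order prefix of shift/control/shift/control.
# Session names are illustrative labels, not identifiers from the script.

def sessions_for(n):
    full = ["pitch_shift_down", "control_no_shift",
            "pitch_shift_down", "control_no_shift"]
    if not 1 <= n <= 4:
        raise ValueError("number of sessions must be 1-4")
    return full[:n]
```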

      Instructions when running 4 sessions:

"In this experiment, there will be 4 sections. Brief breaks between these sections are included while the experimenter initiates the next part, and there will also be breaks during the sessions themselves.

      During each of the sections, you will be reading words off the computer screen and listening to your speech over the headphones. 

      Don’t hesitate to ask questions or raise concerns at any point."

      Instructions when running only 1 session:

      "This experiment consists of 1 session. There will be breaks during the session itself. 

      You will be reading words off the computer screen and listening to your speech over the headphones. 

      Don’t hesitate to ask questions or raise concerns at any point."

      Calibration

      Note: part of this calibration section may not occur if this participant has already done pitchComp; in this case, only the initial f0 will be calculated again.

"We are first going to adjust our script to match your voice. In the next section, some instructions will appear on the screen. Take your time to read them and ask questions if you are not sure what to do. After that, you will see the word “bod” appear on the screen. When you see the word on the screen, read it out loud, just like you would normally say it, only slightly longer. Try to keep the pitch of your voice constant. After you speak, there will be a short break while I calibrate our equipment.

      Do you have any questions?"

      Calibration instructions
1. The experimenter presses “enter” to start the recording after the participant has read the instructions on their screen.

      2. Calculate the f0 of the speaker: The participant is instructed to say “Bod” (The instructions also appear on the screen of the participant). You see the waveform in a figure on the experiment computer. You will be prompted ‘Is the recorded sample good?’ at the command line. 

        • If the recording looks okay and there is no clipping visible in the figure (see below for an example), press ‘y’ and hit enter.

        • If there is clipping visible in the figure (see below), reduce the microphone gain, then press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.

        • If there is a problem with the audio signal (participants didn’t speak, said the wrong thing, coughed, etc.), press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.

   [Figure: speech waveform examples]

      Fig 1: Two examples of speech waveforms. In the image on the left, the waveform falls between -1 and 1 (indicated by the red lines). This is an example of an appropriately set microphone gain. In the image on the right, the waveform is “clipped”—it is cut off by the -1 and 1 boundaries. In this case, the microphone gain needs to be reduced. If the microphone gain is too low (not shown), the waveform will have a very small range. Aim to use most of the range between -1 and 1 without any clipping.

3. Enter a percentage value for the upper and lower boundaries for pitch tracking: this value is used to calculate the upper and lower boundaries for tracking the vocal pitch. The closer the boundaries are to the f0, the better the estimate of the pitch and the better the shifting algorithm works. However, there will be problems if the actual pitch goes outside these boundaries. The exact boundaries that work best depend on the speaker. The default value is 20% and will work well for most speakers. After inspecting 3 figures with this percentage (see below), you can change the value. In most cases, the 20% value is the right one. Some pitch contours are not as steady as in the given example, and the green and blue lines touch the red lines once in a while. In this case, the boundaries should be a little larger, such as 25% or 30%.

        • You will see a figure pop up like this: IMPORTANT: WHEN THE TOP PANEL DOESN'T SHOW UP/ IS EMPTY, RESTART MATLAB:

[Figure: pitch contour with tracking boundaries]

• Press “enter” for the next figure, which shows the pitch contour shifted up (green line). Press “enter” again.

      • The next slide is the pitch contour, shifted down.

      • Press “enter” again.

• Confirm percentage: Both the green and blue lines should be contained within the red horizontal borders in all three figures. (If the participant said “bod,” it's OK if a portion at the end is outside the red borders.) If the boundaries look good, enter ‘y’ in the command line when prompted: 'Is the percentage good?', {'y', 'n'}.

• Examples of boundaries: In the figures below, you see two bars: the upper bar shows the pitch shifting output of Audapter; the lower bar shows the extracted pitch from the waveform. These can differ slightly but should not differ to a large extent. The default boundary value is 20% (example: pitch shifted up, first figure); the 10% boundaries in the example below are too narrow. In case the speaker has an unstable pitch and 20% is too narrow, the boundaries must be adjusted to a larger value, e.g., 25% or 30%. In general, 20% is the lowest value; boundaries only ever need to be adjusted to a larger value. The 10% figure demonstrates what a too-narrow band looks like. The final disturbances are caused by the ‘d’ in the word “bod” and can be ignored.

[Figure: example boundary settings at 20% and 10%]

4. Select pitch shifting manner: Next, the AlgorithmSelect window opens. Here, you select one of three algorithms that Audapter can use to shift the pitch of the voice. There are nine buttons on the left of the screen. When you click on one of these buttons, it will play back an audio sample demonstrating how the participant’s voice will sound with that combination of pitch shifting direction (up, down, none) and pitch shifting algorithm on the right (pp_none, pp_peaks, or pp_valleys). It’s not important to understand the differences in the pitch shifting algorithms; they just indicate certain ways to shift a pitch up and down. Listen to the different algorithms to see which one sounds the most natural. In most cases, this will be pp_none. Then click the radio button on the right corresponding to the best algorithm and click “Select Algorithm”.

      [Figure: AlgorithmSelect window]

      Main experiment

      Check that the speaker is wearing the headphones. If not:

      "Please put the headphones on now." [make sure they are on correctly]

      "On each trial, you will see the word “bod” appear on the screen, just like before. When you see the word on the screen, read it out loud, just like you would normally say it. Keep the pitch of your voice as constant as possible, so it sounds monotone."

      The experimenter can demonstrate by saying "bod" in a monotone, with no pitch fluctuations.

      "You will be speaking into the microphone, and you will hear your own voice played back to you through the headphones. During some trials, you will hear only noise and not your own voice. Try to say the word “bod” the same as you would when you can hear yourself. There will be a break every 20 trials. If you need to take a break at some other time, like to cough or take a drink of water, you can press p on the keyboard. You will get some trials to practice."

      "Do you have any questions before we start?"

      Things to keep an eye on:

      • Monitor the amplitude level. You may need to adjust the microphone gain if the participant starts talking too loudly or too quietly. You or the participant can press "p" to pause the study and talk to the participant or take an extra break.
      • Monitor the boundaries: The boundaries can be adjusted mid-session by pressing "b" and entering a new percentage.

      If Matlab crashes

      As of 10/24/2022 there is no restart script for this experiment. 

      2. Time adaptation

         

      Special running circumstances

      This experiment is part of the cerebellar battery run in 2022-2023. For controls, it is in the SECOND session. For patients, it is in the THIRD session.

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. For controls
      2. For patients

      Pretest phase: setting the OST and PCF files

      "This experiment has one short section and then one long section. There will be breaks between sections while I set up the next part.

      For this first section, you will see one word at a time appear on the screen. When you see the word on screen, read it out loud, just like you would normally say it. You will be speaking into the microphone on the desk, and you will hear your own voice and some noise played back through these headphones. Do you have any questions?"

      MATLAB command: run_cerebTimeAdapt_expt

      The first phase of this experiment gets the participant used to how the study will go, and also records some initial tokens so that you can set the OST file (see this article for information on how OST files work). The OST file in this experiment has three OST transitions: 

      1. 0 to status 2: detects the onset of the vowel 
      2. 2 to status 4: detects the onset of /s/ in "best" 
      3. 4 to status 6: detects the onset of silence, here the /t/ of "best" 

      The participant should say the word like it is the answer to something, "Best." or "Best!" If they are saying it like it is an item in a list ("best..."), or like a question, encourage them to change how they say it by demonstrating. You may have to correct them again during the experiment. The key is that the vowel should not be too drawn out. 

      The initial pretest phase has 9 trials. After the trials are over, the GUI audapter_viewer* will open with the trials. Use audapter_viewer to tweak the OST** file if necessary (see this article on how to use audapter_viewer). These segment transitions are quite robust so you will likely not need to change much; you may need to tweak parameters, but it is highly unlikely that you will need to tweak heuristics.

      *See this guide on how to use audapter_viewer
      **See this guide on how to set OSTs

      Important notes about OST status setting

      • For this experiment, the most important OST status is 2, which is the onset of the vowel.
        • This is the event that triggers the time warping event, so it should be accurate.
        • It is better to have it slightly late than too early.
        • The default heuristic looks for RMS intensity to surpass a certain threshold, so this should be fairly reliable, since the participant is not saying anything before the target word. 
      • Status 6 is the next most important.
        • It must reliably detect the silence in the /t/ closure. It is better to err on the side of late rather than early.
        • This event is important because the interval between status 2 and status 6 is what gets fed into the PCF file so that the time warping event does not end too soon.
        • The default heuristic looks for RMS intensity to go below a certain threshold, which should also be fairly reliable. However, it cannot be all the way at the end of the trial: the time-warp event has to end before the end of the trial, otherwise MATLAB will crash. 
      • Status 4 is not directly used in the experiment, but you should try to have it relatively accurate.
        • OST statuses must occur linearly, so if the tracking never hits status 4, status 6 will also never occur.
        • The default heuristic for this looks for a rise in the RMS ratio, i.e. high intensity in higher frequencies. 
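      The RMS-threshold heuristic described above can be illustrated with a minimal sketch. The real detection runs frame-by-frame inside Audapter; this Python function and its parameter names are hypothetical and shown for intuition only.

```python
# Illustrative sketch of an RMS-threshold onset heuristic (hypothetical names,
# not Audapter's implementation): report the first frame where the RMS
# intensity stays above a threshold for min_frames consecutive frames,
# which guards against triggering on brief noise spikes.
def detect_onset(rms, threshold, min_frames=3):
    run = 0
    for i, value in enumerate(rms):
        run = run + 1 if value > threshold else 0
        if run >= min_frames:
            return i - min_frames + 1  # index where the sustained run began
    return None  # threshold never crossed long enough
```

      The status-6 heuristic is the mirror image: it looks for RMS to stay below a threshold (the /t/ closure) rather than above it.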

      When you are satisfied with the OST tracking, click "Continue/Exit". You will get a dialog asking if you want to save; click "Save and Exit". This will ensure that the new parameters are saved both into the OST file and into the experiment file for that participant. Then a dialog will pop up to make sure it is being saved in the right place. The automatically selected option should be the local folder for that participant/experiment; if it is not, you can find another folder instead. 

      • If you changed the OST at all, you will automatically redo the practice. If this is the case, tell the participant, "We're going to try that one more time." If you need to provide any additional guidance, such as speaking more naturally, you can tell them that as well. 
      • If you did not change the OSTs, you will be able to move onto the next phase. 

      Segmentation (information for PCF) 

      After the OSTs are set, another GUI will pop up for you to segment the most recent practice trials. There will be two user events (denoted by cyan lines): one corresponding to where OST status 2 was for that trial, and one corresponding to where OST status 6 was. They will be labeled "vStart" and "tStart" respectively. Click and drag the lines to align these events with the actual start of /E/ and the start of the /t/ closure for the trial, then press 'continue' to move to the next trial. 

      If you messed up on one of the events, you can click "previous" to go back to that trial (unless it was the last trial). 

      The information about the interval between vStart and tStart will be automatically fed into the PCF file (configures perturbation). 

      (For more detailed instructions on how to use audioGUI, see this article.) 

      When you are done with the last trial, a figure will pop up and you will be asked whether you want to accept that value of durHold. The dots in the figure should be roughly below the line. If not, click "no" and redo the practice phase. 

      Main Phase

      "We'll now begin the main section, which will probably take about 10 minutes. Just like in the practice phase, you'll see a word on the screen, and then say that word like you normally would. Do you have any questions before we start?"

      If no questions, "Whenever you're ready, you may begin."

      Things to keep an eye on:

      • The experiment controller screen will show you the OST statuses for each trial. Keep an eye on these. If they start looking consistently off, you can adjust the OSTs in the middle of the experiment. To do this, press 'a'. At the top of the next iteration of the trial loop, audapter_viewer will open again and you can adjust the statuses by looking at the last trials from the experiment. Because the statuses are relatively robust for this word, you will probably not have to do this, but if the speaker is particularly variable you may have to. 
      • In addition, keep an eye on the participant's loudness. You may have to adjust the gain a bit over the course of the experiment. However, if their voice strength fades dramatically, they may need to take a bit of a break instead. 

      If you need to pause for any reason (other than adjusting OSTs), press the 'p' key on the keyboard. The experiment will pause at the top of the next trial loop. 

      When the experiment is done

      1. Move the experiment data from the local computer to the server
        1. If running at UW: the data will be saved in C:\Users\Public\Documents\experiments\cerebTimeAdapt\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebTimeAdapt\acousticData
        2. If NOT running at UW: the data will be saved into the folder generated by get_acoustSavePath('cerebTimeAdapt'). Copy the participant's folder into the path generated by get_acoustLoadPath('cerebTimeAdapt')
      2. (At UW) Fill out the Lab Notebook on the server, located at \\wcs-cifs\wc\smng\admin\ 
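      Step 1 is a manual folder copy, but it can equally be scripted. Below is a hedged Python sketch (a hypothetical helper, not part of the lab's codebase); the default paths are the UW paths from step 1a.

```python
# Hypothetical helper sketching step 1 (copy the participant's local data
# folder to the server). The default paths are the UW paths given above.
import shutil
from pathlib import Path

def transfer_participant(participant_id,
                         local_root=r"C:\Users\Public\Documents\experiments\cerebTimeAdapt\acousticData",
                         server_root=r"\\wcs-cifs\wc\smng\experiments\cerebTimeAdapt\acousticData"):
    src = Path(local_root) / participant_id
    dst = Path(server_root) / participant_id
    # dirs_exist_ok merges into an existing server folder instead of failing
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst
```

      Outside UW, the roots would instead come from get_acoustSavePath('cerebTimeAdapt') and get_acoustLoadPath('cerebTimeAdapt'), per step 1b.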

      If Matlab crashes during the experiment

      As of 10/14/2022 there is no restart script for this experiment. 

       

      3. Time compensation

      Special circumstances: part of battery

      This experiment is part of the cerebellar battery run in 2022-2023. 

      • Patients (UW and UCSF): Session 3
      • Controls (UW): Session 2

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. Protocol for cerebellar battery: controls
      2. For patients

      What's special about this experiment

      This experiment uses formant clamping to simulate acceleration, deceleration, undershoot, and overshoot of the vowel /ai/. 

      This experiment relies heavily on accurate OST tracking from Audapter. For this, we individualize OST parameters for each participant using an in-house GUI called audapter_viewer. Here is a video guide for how to use audapter_viewer. If you would like more information about the particular heuristics that are used for OST tracking, see this guide. 

      Note: You MUST use UW's version of Audapter (and accompanying Matlab code) for this!! Other versions do not have formant clamping. The experiment code does a hard check for the formant clamping before starting so you will find out quickly if your Audapter is not set up right. 

      Prepping for participant

      Before running the participant, determine if they are a speaker with monophthongization of the target vowel or not. Speakers with monophthongization cannot participate in this experiment because it renders the manipulations null!

      1. Monophthongization of /ai/ is a typical feature of Southern American English and Black English, though not all speakers of these dialects will necessarily have it (depending on their other linguistic experiences)

      2. Monophthongization means the vowel in “buy” or “guide” will sound more like “bah” or “gahd” 

      3. If you cannot hear this specifically without looking at a spectrogram, you will get the opportunity to do that during the LPC order check.

      Pre-experiment instructions

      Tell the participant: “This experiment has four shorter sections and then one long section. There will be breaks between sections while I set up the next part.” 

      1. Type run_taimComp_expt into the command window and hit enter. 
      2. You will be asked if you want to do HARD or SIMPLE. Type simple and hit enter (the "hard" version uses a longer sentence and its OSTs are trickier to navigate) 
      3. You will be asked for participant number. It is important to use the right kind of prefix so that the trials are the right duration (for patients, they are longer/slower with more time between trials) 
        1. UW:

          1. if control, spXXX

          2. If patient, caXXX

        2. UCSF, UC-Berkeley: 

          1. Currently, the code looks for the substring ‘ca’ to identify patients. This can be changed to look for an additional condition if you have some other identifier in your own system

      4. You will then be asked about the participant’s height. This is how we determine the starting value for LPC order.

      Preparation phase 1: LPC order

      1. In this phase, participants will see words on the screen and say them out loud. 

      2. Tell the participant: “For this first section, you will see one word at a time appear on the screen. When you see the word on the screen, read it out loud, just like you would normally say it. You will be speaking into the microphone on the desk, and you will hear your own voice and some noise played back through the headphones. Do you have any questions?"

      3. The participant will complete 30 trials, 10 trials per word (bod, bead, bide). 

        1. If you have not yet determined if the speaker has monophthongization, look at the formant trajectories in “bide” as they show up on the control screen. 

        2. Examine the Audapter-tracked formants as they are coming up on the control screen. Note if the tracking seems to indicate that the LPC order should be changed. Indications that something might be off: 
          1. F2 transition from a to i in /ai/ might be extremely jumpy or jittery 
          2. F2 for /a/, especially near the /b/ transition, is questionable
          3. F2 for /i/ might jump down and up 
      4. The check_audapterLPC GUI will then come up. Use the GUI to find an appropriate LPC order for the participant. 

        1. If you still aren’t sure about the monophthongization, you can look at the formants again in this GUI.

      Pretest phase 2a: OST setting for "BUY donuts" and "GUIDE boaters" 

      1. Tell the participant: “For this section, you will see a phrase and read it out loud. Try to read it in a clear voice, putting emphasis on the capitalized word, like this: BUY donuts now. Can you say those phrases for me?”
        1. You should coach them until they say the phrase in the right way: [sound examples of good productions: buy donuts example; guide tutors example] 

          1. It is important to use focus (emphasis) on the capitalized word so that it is long enough without being a very unnatural speech rate. 

          2. They should NOT put pauses between words, because they will be confounding the experimental conditions (and making it difficult to automatically track the segments). It should be a smooth, slow-ish speech rate. 

          3. Although duration will not be tracked in this phase, you should try to get them to say it at a speech rate similar to what will be used in the full experiment. That way, the landmarks for the vowels will be consistent (people may have different proportions of [a] to [i] at different speech rates). 
      2. When they have gotten comfortable with saying the phrases, press the space bar to advance to the screen that gives them the general instructions. Tell them: “Okay, you can start whenever you are ready.” 

      3. They will read each phrase 9 times in random order

      4. When they finish, tell them: “I am just going to make some measurements, so you can relax for a few minutes.” 

      5. After they have finished, audapter_viewer will open. Use audapter_viewer to set the OST parameters for the participant. [See guide on Audapter’s OST capabilities or how to use audapter_viewer]

        OST setting for HARD version

        1. Status 2: onset of /wi/

        2. Status 4: onset of /b/ or /g/ 

          1. You will want the threshold for this status to be low enough that RMS fluctuations through “we” do not trigger it. 

        3. Status 6: onset of /ai/ 

          1. This is the most important status! It finds the beginning of the target vowel and thus the beginning of the perturbation.

          2. This status should be rather robustly tracking the very beginning of the vowel, but if you need it to be a touch late to avoid accidental triggers at other points, that is okay. It should not be more than 50 ms late or so, however. 

        4. Status 8: start of /d/ in “guide” or “donuts” 

          1. This status is equally important! It finds the end of the vowel and thus the end of the perturbation. 

          2. This is less finicky than “buy yogurt” because there isn’t anything in the middle that should be causing a decrease in intensity.

         

        OST setting for SIMPLE version

        1. Status 2: onset of /ai/ 

          1. This is the most important status! It finds the beginning of the target vowel and thus the beginning of the perturbation.

          2. This status should be rather robustly tracking the very beginning of the vowel, but if you need it to be a touch late to avoid accidental triggers at other points, that is okay. It should not be more than 50 ms late or so, however. 

        2. Status 4: start of /d/ in “guide” or “donuts” 

          1. This status is equally important! It finds the end of the vowel and thus the end of the perturbation. 

          2. You should try to get this status as close to the end of the vowel as possible, since /d/ usually has enough voicing such that Audapter tries to track formants through it. 

         
      6. When you are satisfied with the parameters, click “Continue and Exit”. 

        1. Click “Save and Exit” 

        2. Verify the folder you would like to save into 

      7. If you had to change anything from the default, it is HIGHLY RECOMMENDED to run the OST setting phase again to make sure that they work with new data (and thus that they can generalize to the participant’s speech) 

        1. If you have to repeat, tell the participant: “We’re just going to do that one more time so I can make sure everything is set up correctly.” 

      Segmentation

      If everything was okay, audioGUI will then pop up for you to hand-correct four landmarks on all 18 trials. [See example of how to segment: buy donuts; guide tutors. Note: the full phrases are slightly different from the current version, but the segmentation of /ai/ is the same.] 

      Note: the segmentation can take a while so if you are comfortable with multitasking and you have the technological means (e.g. you are in the same room as them), you can make chitchat with them while you make adjustments

      1. aiStart: beginning of vowel 

        1. Move this event to the beginning of the /ai/ in “buy” 

      2. a2iStart

        1. Move this event to when F2 starts moving up towards the second quality in /ai/ in earnest. 

      3. iPlateauStart

        1. Move this event to where F2 starts to reach the plateau (do not mark the peak—mark where the F2 trajectory starts to flatten out) 

      4. dStart

        1. Move this event to where the /d/ closure starts. This should be where formant energy reduces; some voicing will almost certainly still be there. 

      Preparation phase 2b: "BUY yogurt" or "BUY Yoshi" 

      1. Preparation phase 2b: OST check and determining when transitions occur for “buy Yoshi" ("buy yogurt" if hard version)  

        1. The same thing will happen again: “BUY Yoshi now” (or, if hard mode, "We BUY yogurt now") 

        2. Tell the participant: “We are going to do the same thing now, but with a different phrase. Can you read these out loud for me?” 

          1. Again, coach them until they are producing them in the right way [See example wave files for reference] 

        3. When they have gotten comfortable with saying the phrase, press the space bar to advance to the screen that gives them the general instructions. Tell them: “Okay, you can start whenever you are ready.” 

        4. They will read the phrase 9 times 

        5. After they have finished, audapter_viewer will open. Use audapter_viewer to set the OST parameters for the participant. [See guide on Audapter’s OST capabilities or how to use audapter_viewer]

          OST setting for HARD version

          1. Status 2: onset of /wi/

          2. Status 4: onset of /b/ or /g/ 

            1. You will want the threshold for this status to be low enough that RMS fluctuations through “we” do not trigger it. 

          3. Status 6: onset of /ai/ 

            1. This is the most important status! It finds the beginning of the target vowel and thus the beginning of the perturbation.

            2. This status should be rather robustly tracking the very beginning of the vowel, but if you need it to be a touch late to avoid accidental triggers at other points, that is okay. It should not be more than 50 ms late or so, however. 

          4. Status 8: start of /g/ in “yogurt"

            1. This status is equally important! It finds the end of the vowel and thus the end of the perturbation. 

            2. Unlike in "buy donuts" or "guide boaters", the glide in the middle of "yogurt" can cause a dip in intensity, so watch out for accidental early triggers.

           

          OST setting for SIMPLE version

          1. Status 2: onset of /ai/ 

            1. This is the most important status! It finds the beginning of the target vowel and thus the beginning of the perturbation.

            2. This status should be rather robustly tracking the very beginning of the vowel, but if you need it to be a touch late to avoid accidental triggers at other points, that is okay. It should not be more than 50 ms late or so, however. 

          2. Status 4: start of /sh/ in “Yoshi"

            1. This status is equally important! It finds the end of the vowel and thus the end of the perturbation. 

            2. This should be a pretty robust tracker. 

        6. When you are satisfied with the parameters, click “Continue and Exit”. 

          1. Click “Save and Exit” 

          2. Verify the folder you would like to save into 

        7. If you had to change anything from the default, it is HIGHLY RECOMMENDED to run the OST setting phase again to make sure that they work with new data (and thus that they can generalize to the participant’s speech) 

          1. If you have to repeat, tell the participant: “We’re just going to do that one more time so I can make sure everything is set up correctly.” 

      Segmentation

      If everything was okay, audioGUI will then pop up for you to hand-correct five landmarks on all nine trials. [See guide on how to use audioGUI—note: it may be easier to toggle the formant display off so you can see the spectrogram more clearly] [See example of how to segment]

      1. aiStart: beginning of vowel 

        1. Move this event to the beginning of the /ai/ in “buy” 

      2. a2iStart

        1. Move this event to when F2 starts moving up towards the second quality in /ai/ in earnest. 

      3. iPlateauStart

        1. Move this event to where F2 starts to reach the plateau (do not mark the peak—mark where the F2 trajectory starts to flatten out) 

      4. oStart

        1. Move this event to where F2 starts to flatten out again after going back down for the /o/ in yogurt

      5. shStart (simple mode) / gStart (hard mode) 

        1. Simple mode: Move this event to where the noise for esh starts. 

        2. Hard mode: move this event to where the /g/ closure starts. This should be where formant energy reduces; some voicing will almost certainly still be there. 

      Preparation phase 3: Duration training

      1. Tell the participant: 

        1. “In this section, you will practice saying the phrases at a good speed. When you say each phrase, you will get some feedback about how fast you were talking. If you see a BLUE circle, it’ll tell you to speak a little faster. If you see a YELLOW circle, it will tell you to speak a little slower. If you see a GREEN circle, that means you were speaking at a good speed.” 

        2. Pause to confirm 

        3. “So if you are told to speak a little slower or a little faster, you don’t have to really change how you are speaking drastically. Keep speaking smoothly and clearly, and just adjust a little. So like if you said [speak quickly] “we BUY donuts now” and have to slow down, you can just say [speak more slowly] “we BUY donuts now”, you don’t have to put any extra pauses in or anything.”

        4. Pause to confirm 

      2. They will do 15 trials (5 of each phrase). 

        1. Keep general track of how they do (usually too fast, usually too slow, usually good, etc.) 

        2. Keep an eye on the OSTs. The duration feedback is based on the OST values, so if they are not tracking correctly, the feedback will be off. 

      3. You will be given the option to repeat. 

        1. If you need to adjust the OSTs, you can do that, and then run again 

        2. Give general guidance on how fast to speak to the participant if necessary (referring to if they were generally fast/slow) 
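      The color feedback described in the instructions maps the measured vowel duration (taken from the OSTs) onto three bins. A minimal sketch, assuming hypothetical boundary values; the real minimum and maximum come from the experiment settings and can be loosened mid-experiment with 'e'.

```python
# Illustrative sketch only: map a measured vowel duration (seconds) to the
# feedback circle color. The boundary values min_dur/max_dur are hypothetical;
# the real ones live in the experiment settings.
def duration_feedback(dur, min_dur=0.25, max_dur=0.45):
    if dur > max_dur:
        return "blue"    # too slow: speak a little faster
    if dur < min_dur:
        return "yellow"  # too fast: speak a little slower
    return "green"       # good speed
```

      This also shows why correct OST tracking matters here: if the OSTs mis-detect the vowel edges, the measured duration (and hence the color) will be wrong even when the participant's speed is fine.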

      Main experiment

      1. Tell the participant: “This is the last section. It will be just like the section you just did, but will last longer, about 30 minutes. There will be breaks every 30 trials. If you need to pause at another time, like to cough or to drink water, you can press p on the keyboard. Do you have any questions?” 

      2. During the experiment: 

        1. Keep an eye on their OST tracking. You can adjust mid-experiment if necessary by pressing ‘a’

          1. You will be asked what file you want to change. Daid (hard) and DaidSimple (simple) work for "buy donuts" and "guide boaters". buyYogurt (hard) and buyYoshi (simple) work for "buy yogurt/Yoshi". 

        2. There are a few other settings that you can adjust mid-experiment by pressing 'e' 
          1. trial duration: If participants (particularly patients) are having a hard time with the speed of the trials
          2. target vowel duration boundaries: if participants (particularly patients) are having a hard time getting the right duration feedback (not due to OST issues), you can loosen the boundaries for what is considered to be a good duration. Ideally you should only increase the maximum; participants must go slow enough for them to be able to react to the perturbation
          3. LPC Order: if you notice that the formants are not tracking the way they should be, the LPC order may be off. 
          4. RMS ratio threshold: this is the parameter that limits when formants can be tracked relative to how much energy there is in the high frequencies. This is most useful for avoiding tracking formants during sibilants. The default value is 2.5, which should be pretty good. Higher values are more restrictive, lower values will allow more tracking. 
            1. NOTE: 2.5 is 1/0.4. Audapter's coding uses 0.4 for the actual OST tracking, but has it inverted for the threshold. 
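      The inversion in the note above can be stated as a pair of one-line conversions (hypothetical helper names; only the arithmetic comes from the note):

```python
# The settings menu displays the RMS ratio threshold as 2.5, while Audapter's
# internal OST parameter stores the inverted value 0.4 (1/2.5).
# Helper names are hypothetical.
def to_internal(displayed):
    return 1.0 / displayed

def to_displayed(internal):
    return 1.0 / internal
```

      So raising the displayed threshold (more restrictive) corresponds to lowering the internal value, and vice versa.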

      If Matlab crashes during the experiment

      To restart taimComp in the event of a crash: 

      1. Type in run_taimComp_expt and hit enter
      2. Type in simple/hard (depending on what version you have been doing) 
      3. Type in the participant code
      4. You will then be asked if you want to load in their expt file (which should exist already from the first attempt at running). Type y
      5. You will be asked if you want to OVERWRITE their expt file. Click CANCEL
      6. The script will then look for which modules have already been done. If a data file already exists in each module (LPC order, OST testing, duration training), it will let you know and ask if you want to redo that phase anyway. If there is NOT a data file in one of those modules, that means that you didn't complete that module and will automatically redo it
        1. Note: if you didn't get to segmentation in OST pretest, you should redo it anyway 
      7. If you were in the middle of the main experiment, it will start you back where you were 
        1. Note: if you didn't get to the first trial of the perturbation phase, it will start over from trial 1. 

       

      Suggested time for break

      4. Time JND

         

      Special running circumstances

      This experiment is part of the cerebellar battery run in 2022-2023. For controls, it is in the SECOND session. For patients, it is in the THIRD session.

      In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session: 

      1. Protocol for cerebellar battery: controls
      2. For patients

      Stimuli

      This is a perceptual experiment, using prefabricated tokens. 

      • For people running this study at UW: the tokens are located on the server. 
      • For people running this study at UCSF: the tokens were shared via Google drive. They should be copied into the folder that is returned when you run fullfile(get_exptLoadPath, 'cerebDurJND', 'stimuli')

      Practice phase

      Matlab command: run_cerebDurJND_expt

      "In this experiment, you will be listening to three words, which will differ mainly in the duration of the vowel. You will then press a button on the keyboard to indicate whether the second word sounded more similar to the first word or the third word. If you are not sure, just make your best guess."

      At this point you can answer any questions they might have (there are also task reminders in each trial). 

      "You will start first with a practice phase, where we can make sure that the volume is okay and to get you used to the task. In order to move onto the full task, you will need to get at least 5 of the 6 practice trials correct." 

      This experiment starts with a practice phase so that participants can get used to how the task is run. The practice phase uses stimuli with very large intervals, so participants should be able to hear the difference. During practice, the participant will get feedback on if their answers were right or not. They will automatically move onto the full phase of the experiment once they get at least 5/6 practice trials correct. 

      Note: this experiment assumes that 100 ms is larger than the biggest JND among cerebellar patients, based on data from a different study (for these particular stimuli, it is 100 ms vs. 200 ms, so it is a very large proportional difference). However, if you get a participant who cannot complete the practice because they cannot hear the differences, please contact Robin right away so additional stimuli can be made and adjustments to the code can be installed. There will not be an infinite loop: if the participant fails practice more than 5 times, you will be able to manually override. 

      Main Phase

      Once they have passed the practice, they will go automatically to the main phase of the experiment. They will complete up to 100 trials, stopping early if they reach 30 reversals. This will last about 10 minutes. 
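      The stopping rule (100 trials or 30 reversals, whichever comes first) is characteristic of an adaptive staircase. Below is a simplified 1-up/1-down sketch in Python; the step size, starting interval, and all names are hypothetical illustrations, not the experiment's actual parameters.

```python
# Simplified adaptive staircase sketch (hypothetical parameters, not the
# experiment's code). respond(interval_ms) returns True if the participant
# answered correctly. The interval gets harder (smaller) after a correct
# answer and easier (larger) after an error; a reversal is a change of
# direction. Stops at max_trials trials or max_reversals reversals.
def run_staircase(respond, start=100, step=10, max_trials=100, max_reversals=30):
    interval = start
    last_dir = 0
    reversals = 0
    trials = 0
    while trials < max_trials and reversals < max_reversals:
        correct = respond(interval)
        direction = -1 if correct else 1
        if last_dir != 0 and direction != last_dir:
            reversals += 1
        last_dir = direction
        interval = max(step, interval + direction * step)
        trials += 1
    return trials, reversals, interval
```

      With a listener whose threshold sits between two step values, the interval oscillates around the threshold and reversals accumulate quickly, so the 30-reversal cap usually ends the run before trial 100.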

      After the participant leaves

      • If running at UW: the data will be saved in C:\Users\Public\Documents\experiments\cerebDurJND\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebDurJND\acousticData 
      • If NOT running at UW: the data will be saved into the folder generated by get_acoustSavePath('cerebDurJND'). Copy the participant's folder into the path generated by get_acoustLoadPath('cerebDurJND')

      If Matlab crashes during the experiment

      As of 9/23/2022 there is no restart script to start the experiment over in the middle of the experiment. 

       

      5. Coordinative tapping

      Equipment setup: 

      1. Move the tapping apparatus to one of the three velcro spots. Make sure it is comfortable for the participant to reach. 
      2. Plug in the tapping apparatus (uses mini USB). 

      {{ANNEKE}} 

       

      After the participant leaves

      • Transfer data from all experiments to their respective folders on the server
      • Fill in lab notebook for each experiment. 

       

       Last updated RPK 10/24/2022