UnityEngine.AudioModule An implementation of IPlayable that controls an AudioClip. Creates an AudioClipPlayable in the PlayableGraph. The PlayableGraph that will contain the new AudioClipPlayable. The AudioClip that will be added to the PlayableGraph. True if the clip should loop, false otherwise. An AudioClipPlayable linked to the PlayableGraph. AudioMixer asset. Routing target. How time should progress for this AudioMixer. Used during Snapshot transitions. Resets an exposed parameter to its initial value. Exposed parameter. Returns false if the parameter was not found or could not be set. Connected groups in the mixer form a path from the mixer's master group to the leaves. This path has the format "Master Group/Child of Master Group/Grandchild of Master Group", so to find the grandchild group in this example, a valid search string would be for instance "randchi", which would return exactly one group, while "hild" or "oup/" would return 2 different groups. Sub-string of the paths to be matched. Groups in the mixer whose paths match the specified search path. The name must be an exact match. Name of snapshot object to be returned. The snapshot identified by the name. Returns the value of the exposed parameter specified. If the parameter doesn't exist the function returns false. Prior to calling SetFloat and after ClearFloat has been called on this parameter, the value returned will be that of the current snapshot or snapshot transition. Name of exposed parameter. Return value of exposed parameter. Returns false if the exposed parameter specified doesn't exist. Sets the value of the exposed parameter specified. When a parameter is exposed, it is not controlled by mixer snapshots and can therefore only be changed via this function. Name of exposed parameter. New value of exposed parameter. Returns false if the exposed parameter was not found or snapshots are currently being edited. Transitions to a weighted mixture of the snapshots specified.
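The exposed-parameter and snapshot-transition methods above can be driven from script. A minimal sketch, assuming a mixer with a parameter exposed under the hypothetical name "MusicVolume" and two hypothetical snapshots assigned in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class MixerControl : MonoBehaviour
{
    public AudioMixer mixer;          // assigned in the Inspector
    public AudioMixerSnapshot calm;   // hypothetical snapshot
    public AudioMixerSnapshot combat; // hypothetical snapshot

    void SetMusicVolume(float db)
    {
        // "MusicVolume" must match the name the parameter was exposed under.
        if (!mixer.SetFloat("MusicVolume", db))
            Debug.LogWarning("Parameter not found or snapshots are being edited.");
    }

    void BlendSnapshots(float combatWeight)
    {
        // Weighted mixture of the two snapshots, reached over 0.5 seconds.
        mixer.TransitionToSnapshots(
            new[] { calm, combat },
            new[] { 1f - combatWeight, combatWeight },
            0.5f);
    }
}
```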
This can be used for games that specify the game state as a continuum between states or for interpolating snapshots from a triangulated map location. The set of snapshots to be mixed. The mix weights for the snapshots specified. Relative time after which the mixture should be reached from any current state. Object representing a group in the mixer. An implementation of IPlayable that controls an audio mixer. Object representing a snapshot in the mixer. Performs an interpolated transition towards this snapshot over the time interval specified. Relative time after which this snapshot should be reached from any current state. The mode in which an AudioMixer should update its time. Update the AudioMixer with scaled game time. Update the AudioMixer with unscaled realtime. A PlayableBinding that contains information representing an AudioPlayableOutput. Creates a PlayableBinding that contains information representing an AudioPlayableOutput. A reference to a UnityEngine.Object that acts as a key for this binding. The name of the AudioPlayableOutput. Returns a PlayableBinding that contains information that is used to create an AudioPlayableOutput. An IPlayableOutput implementation that will be used to play audio. Creates an AudioPlayableOutput in the PlayableGraph. The PlayableGraph that will contain the AudioPlayableOutput. The name of the output. The AudioSource that will play the AudioPlayableOutput source Playable. A new AudioPlayableOutput attached to the PlayableGraph. Gets the state of output playback when seeking. Returns true if the output plays when seeking. Returns false otherwise. Returns an invalid AudioPlayableOutput. Controls whether the output should play when seeking. Set to true to play the output when seeking. Set to false to disable audio scrubbing on this output. Default is true. The Audio Chorus Filter takes an Audio Clip and processes it to create a chorus effect. Chorus delay in ms. 0.1 to 100.0. Default = 40.0 ms. Chorus modulation depth.
0.0 to 1.0. Default = 0.03. Volume of original signal to pass to output. 0.0 to 1.0. Default = 0.5. Chorus feedback. Controls how much of the wet signal gets fed back into the chorus buffer. 0.0 to 1.0. Default = 0.0. Chorus modulation rate in Hz. 0.0 to 20.0. Default = 0.8 Hz. Volume of 1st chorus tap. 0.0 to 1.0. Default = 0.5. Volume of 2nd chorus tap. This tap is 90 degrees out of phase of the first tap. 0.0 to 1.0. Default = 0.5. Volume of 3rd chorus tap. This tap is 90 degrees out of phase of the second tap. 0.0 to 1.0. Default = 0.5. A container for audio data. Returns true if this audio clip is ambisonic (read-only). The number of channels in the audio clip. (Read Only) The sample frequency of the clip in Hertz. (Read Only) Returns true if the AudioClip is ready to play (read-only). The length of the audio clip in seconds. (Read Only) Corresponding to the "Load In Background" flag in the inspector; when this flag is set, loading will happen delayed without blocking the main thread. Returns the current load state of the audio data associated with an AudioClip. The load type of the clip (read-only). Preloads audio data of the clip when the clip asset is loaded. When this flag is off, scripts have to call AudioClip.LoadAudioData() to load the data before the clip can be played. Properties like length, channels and format are available before the audio data has been loaded. The length of the audio clip in samples. (Read Only) Creates a user AudioClip with a name and with the given length in samples, channels and frequency. Name of clip. Number of sample frames. Number of channels per frame. Sample frequency of clip. Audio clip is played back in 3D. True if clip is streamed, that is if the pcmreadercallback generates data on the fly. This callback is invoked to generate a block of sample data. Non-streamed clips call this only once at creation time while streamed clips call this continuously. This callback is invoked whenever the clip loops or changes playback position. A reference to the created AudioClip. Fills an array with sample data from the clip. Loads the audio data of a clip. Clips that have "Preload Audio Data" set will load the audio data automatically. Returns true if loading succeeded. Delegate called each time AudioClip reads data. Array of floats containing data read from the clip. Delegate called each time AudioClip changes read position. New position in the audio clip. Set sample data in a clip. Unloads the audio data associated with the clip. This works only for AudioClips that are based on actual sound file assets. Returns false if unloading failed. Determines how the audio clip is loaded in. The audio data of the clip will be kept in memory in compressed form. The audio data is decompressed when the audio clip is loaded. Streams audio data from disk. An enum containing different compression types. AAC Audio Compression. Adaptive differential pulse-code modulation.
Sony proprietary hardware format. Nintendo ADPCM audio compression format. Sony proprietary hardware codec. MPEG Audio Layer III. Uncompressed pulse-code modulation. Sony proprietary hardware format. Vorbis compression format. Xbox One proprietary hardware format. Specifies the current properties or desired properties to be set for the audio system. The length of the DSP buffer in samples, which determines the output latency of sounds played through the audio output device. The current maximum number of simultaneously audible sounds in the game. The maximum number of managed sounds in the game. Beyond this limit sounds will simply stop playing. The current sample rate of the audio output device used. The current speaker mode used by the audio output device. Value describing the current load state of the audio data associated with an AudioClip. Value returned by AudioClip.loadState for an AudioClip that has failed loading its audio data. Value returned by AudioClip.loadState for an AudioClip that has succeeded loading its audio data. Value returned by AudioClip.loadState for an AudioClip that is currently loading audio data. Value returned by AudioClip.loadState for an AudioClip that has no audio data loaded and where loading has not been initiated yet. The Audio Distortion Filter distorts the sound from an AudioSource or sounds reaching the AudioListener. Distortion value. 0.0 to 1.0. Default = 0.5. The Audio Echo Filter repeats a sound after a given Delay, attenuating the repetitions based on the Decay Ratio. Echo decay per delay. 0 to 1. 1.0 = no decay, 0.0 = total decay (i.e. a simple one-line delay). Default = 0.5. Echo delay in ms. 10 to 5000. Default = 500. Volume of original signal to pass to output. 0.0 to 1.0. Default = 1.0. Volume of echo signal to pass to output. 0.0 to 1.0. Default = 1.0. The Audio High Pass Filter passes high frequencies of an AudioSource, and cuts off signals with frequencies lower than the Cutoff Frequency. Highpass cutoff frequency in Hz. 10.0 to 22000.0.
Default = 5000.0. Determines how much the filter's self-resonance is dampened. Representation of a listener in 3D space. The paused state of the audio system. This lets you set whether the Audio Listener should be updated in the fixed or dynamic update. Controls the game sound volume (0.0 to 1.0). Provides a block of the listener (master)'s output data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. Deprecated Version. Returns a block of the listener (master)'s output data. Provides a block of the listener (master)'s spectrum data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. The FFTWindow type to use when sampling. Deprecated Version. Returns a block of the listener (master)'s spectrum data. Number of values (the length of the samples array). Must be a power of 2. Min = 64. Max = 8192. The channel to sample from. The FFTWindow type to use when sampling. The Audio Low Pass Filter passes low frequencies of an AudioSource or all sounds reaching an AudioListener, while removing frequencies higher than the Cutoff Frequency. Returns or sets the current custom frequency cutoff curve. Lowpass cutoff frequency in Hz. 10.0 to 22000.0. Default = 5000.0. Determines how much the filter's self-resonance is dampened. Allows recording the main output of the game or specific groups in the AudioMixer. Returns the number of samples available since the last time AudioRenderer.Render was called. This is dependent on the frame capture rate. Number of samples available since last recorded frame. Performs the recording of the main output as well as any optional mixer groups that have been registered via AudioRenderer.AddMixerGroupSink. The buffer to write the sample data to. True if the recording succeeded. Enters audio recording mode. After this Unity will output silence until AudioRenderer.Stop is called. True if the engine was switched into output recording mode.
False if it is already recording. Exits audio recording mode. After this audio output will be audible again. True if the engine was recording when this function was called. The Audio Reverb Filter takes an Audio Clip and distorts it to create a custom reverb effect. Decay HF Ratio: High-frequency to low-frequency decay time ratio. Ranges from 0.1 to 2.0. Default is 0.5. Reverberation decay time at low-frequencies in seconds. Ranges from 0.1 to 20.0. Default is 1.0. Reverberation density (modal density) in percent. Ranges from 0.0 to 100.0. Default is 100.0. Reverberation diffusion (echo density) in percent. Ranges from 0.0 to 100.0. Default is 100.0. Mix level of dry signal in output in mB. Ranges from -10000.0 to 0.0. Default is 0. Reference high frequency in Hz. Ranges from 20.0 to 20000.0. Default is 5000.0. Reference low-frequency in Hz. Ranges from 20.0 to 1000.0. Default is 250.0. Initial reflection delay time. Early reflections level relative to room effect in mB. Ranges from -10000.0 to 1000.0. Default is -10000.0. Late reverberation delay time relative to first reflection in seconds. Ranges from 0.0 to 0.1. Default is 0.04. Late reverberation level relative to room effect in mB. Ranges from -10000.0 to 2000.0. Default is 0.0. Set/Get reverb preset properties. Room effect level at low frequencies in mB. Ranges from -10000.0 to 0.0. Default is 0.0. Room effect high-frequency level relative to low-frequency level in mB. Ranges from -10000.0 to 0.0. Default is 0.0. Room effect low-frequency level in mB. Ranges from -10000.0 to 0.0. Default is 0.0. Reverb presets used by the Reverb Zone class and the audio reverb filter. Alley preset. Arena preset. Auditorium preset. Bathroom preset. Carpeted hallway preset. Cave preset. City preset. Concert hall preset. Dizzy preset. Drugged preset. Forest preset. Generic preset. Hallway preset. Hangar preset. Livingroom preset. Mountains preset.
No reverb preset selected. Padded cell preset. Parking Lot preset. Plain preset. Psychotic preset. Quarry preset. Room preset. Sewer pipe preset. Stone corridor preset. Stoneroom preset. Underwater preset. User defined preset. Reverb Zones are used when you want to create location-based ambient effects in the scene. High-frequency to mid-frequency decay time ratio. Reverberation decay time at mid frequencies. Value that controls the modal density in the late reverberation decay. Value that controls the echo density in the late reverberation decay. The distance from the centerpoint that the reverb will not have any effect. Default = 15.0. The distance from the centerpoint that the reverb will have full effect at. Default = 10.0. Early reflections level relative to room effect. Initial reflection delay time. Late reverberation level relative to room effect. Late reverberation delay time relative to initial reflection. Set/Get reverb preset properties. Room effect level (at mid frequencies). Relative room effect level at high frequencies. Relative room effect level at low frequencies. Like rolloffscale in global settings, but for reverb room size effect. Reference high frequency (Hz). Reference low frequency (Hz). Rolloff modes that a 3D sound can have in an audio source. Use this when you want to use a custom rolloff. Use this mode when you want to lower the volume of your sound over distance. Use this mode when you want a real-world rolloff. Controls the global audio settings from script. Returns the speaker mode capability of the current audio driver. (Read Only) Returns the current time of the audio system. Get the mixer's current output rate. Gets the current speaker mode. Default is 2-channel stereo.
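The AudioSettings properties above can be queried directly from script. A small sketch that logs the current output configuration:

```csharp
using UnityEngine;

public class AudioInfo : MonoBehaviour
{
    void Start()
    {
        // Read-only information about the audio output device.
        Debug.Log("Sample rate: " + AudioSettings.outputSampleRate + " Hz");
        Debug.Log("Speaker mode: " + AudioSettings.speakerMode);
        Debug.Log("Driver capability: " + AudioSettings.driverCapabilities);

        // dspTime is the absolute time-line used by AudioSource.PlayScheduled.
        Debug.Log("DSP time: " + AudioSettings.dspTime);

        // The current configuration can be read, modified and reapplied
        // via AudioSettings.Reset.
        AudioConfiguration config = AudioSettings.GetConfiguration();
        Debug.Log("DSP buffer size: " + config.dspBufferSize + " samples");
    }
}
```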
A delegate called whenever the global audio settings are changed, either by AudioSettings.Reset or by an external device change such as the OS control panel changing the sample rate or because the default output device was changed, for example when plugging in an HDMI monitor or a USB headset. True if the change was caused by a device change. Returns the current configuration of the audio device and system. The values in the struct may then be modified and reapplied via AudioSettings.Reset. The new configuration to be applied. Get the mixer's buffer size in samples. The length of each buffer in the ringbuffer. The number of buffers. Returns the name of the spatializer selected on the currently-running platform. The spatializer plugin name. A delegate called whenever the global audio settings are changed, either by AudioSettings.Reset or by an external factor such as the OS control panel changing the sample rate or because the default output device was changed, for example when plugging in an HDMI monitor or a USB headset. True if the change was caused by a device change. Performs a change of the device configuration. In response to this the AudioSettings.OnAudioConfigurationChanged delegate is invoked with the argument deviceWasChanged=false. It cannot be guaranteed that the exact settings specified can be used, but an attempt is made to use the closest match supported by the system. The new configuration to be used. True if all settings could be successfully applied. A representation of audio sources in 3D. Bypass effects (applied from filter components or global listener filters). When set, global effects on the AudioListener will not be applied to the audio signal generated by the AudioSource. Does not apply if the AudioSource is playing into a mixer group. When set, the signal from the AudioSource is not routed into the global reverb associated with reverb zones. The default AudioClip to play. Sets the Doppler scale for this AudioSource.
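The AudioSource properties described above and below can be configured from script. A minimal sketch, assuming an AudioSource on the same GameObject and a clip assigned in the Inspector:

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PauseMenuMusic : MonoBehaviour
{
    public AudioClip menuMusic; // assigned in the Inspector

    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.clip = menuMusic;
        source.bypassReverbZones = true;   // skip reverb-zone routing
        source.ignoreListenerPause = true; // keep playing while AudioListener.pause is true
        source.dopplerLevel = 0f;          // no Doppler effect for menu music
        source.loop = true;
        source.Play();
    }
}
```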
Allows AudioSource to play even though AudioListener.pause is set to true. This is useful for menu element sounds or background music in pause menus. This makes the audio source not take into account the volume of the audio listener. Is the clip playing right now (Read Only)? True if all sounds played by the AudioSource (main sound started by Play() or playOnAwake as well as one-shots) are culled by the audio system. Is the audio clip looping? (Logarithmic rolloff) MaxDistance is the distance a sound stops attenuating at. Within the Min distance the AudioSource will cease to grow louder in volume. Un- / Mutes the AudioSource. Mute sets the volume=0, Un-Mute restores the original volume. The target group to which the AudioSource should route its signal. Pans a playing sound in a stereo way (left or right). This only applies to sounds that are Mono or Stereo. The pitch of the audio source. If set to true, the audio source will automatically start playing on awake. Sets the priority of the AudioSource. The amount by which the signal from the AudioSource will be mixed into the global reverb associated with the Reverb Zones. Sets/Gets how the AudioSource attenuates over distance. Sets how much this AudioSource is affected by 3D spatialisation calculations (attenuation, Doppler, etc.). 0.0 makes the sound full 2D, 1.0 makes it full 3D. Enables or disables spatialization. Determines if the spatializer effect is inserted before or after the effect filters. Sets the spread angle (in degrees) of a 3D stereo or multichannel sound in speaker space. Playback position in seconds. Playback position in PCM samples. Whether the Audio Source should be updated in the fixed or dynamic update. The volume of the audio source (0.0 to 1.0). Reads a user-defined parameter of a custom ambisonic decoder effect that is attached to an AudioSource. Zero-based index of user-defined parameter to be read. Return value of the user-defined parameter that is read.
True, if the parameter could be read. Get the current custom curve for the given AudioSourceCurveType. The curve type to get. The custom AnimationCurve corresponding to the given curve type. Provides a block of the currently playing source's output data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. Deprecated Version. Returns a block of the currently playing source's output data. Reads a user-defined parameter of a custom spatializer effect that is attached to an AudioSource. Zero-based index of user-defined parameter to be read. Return value of the user-defined parameter that is read. True, if the parameter could be read. Provides a block of the currently playing audio source's spectrum data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. The FFTWindow type to use when sampling. Deprecated Version. Returns a block of the currently playing source's spectrum data. The number of samples to retrieve. Must be a power of 2. The channel to sample from. The FFTWindow type to use when sampling. Pauses playing the clip. Plays the clip with an optional delay. Delay in number of samples, assuming a 44100Hz sample rate (meaning that Play(44100) will delay the playing by exactly 1 sec). Plays an AudioClip at a given position in world space. Audio data to play. Position in world space from which sound originates. Playback volume. Plays the clip with a delay specified in seconds.
Users are advised to use this function instead of the old Play(delay) function that took a delay specified in samples relative to a reference rate of 44.1 kHz as an argument. Delay time specified in seconds. Plays an AudioClip, and scales the AudioSource volume by volumeScale. The clip being played. The scale of the volume (0-1). Plays the clip at a specific time on the absolute time-line that AudioSettings.dspTime reads from. Time in seconds on the absolute time-line that AudioSettings.dspTime refers to for when the sound should start playing. Sets a user-defined parameter of a custom ambisonic decoder effect that is attached to an AudioSource. Zero-based index of user-defined parameter to be set. New value of the user-defined parameter. True, if the parameter could be set. Set the custom curve for the given AudioSourceCurveType. The curve type that should be set. The curve that should be applied to the given curve type. Changes the time at which a sound that has already been scheduled to play will end. Notice that depending on the timing not all rescheduling requests can be fulfilled. Time in seconds. Changes the time at which a sound that has already been scheduled to play will start. Time in seconds. Sets a user-defined parameter of a custom spatializer effect that is attached to an AudioSource. Zero-based index of user-defined parameter to be set. New value of the user-defined parameter. True, if the parameter could be set. Stops playing the clip. Unpauses the paused playback of this AudioSource. This defines the curve type of the different custom curves that can be queried and set within the AudioSource. Custom Volume Rolloff. Reverb Zone Mix. The Spatial Blend. The 3D Spread. These are speaker types defined for use with AudioSettings.speakerMode. Channel count is set to 6. 5.1 speaker setup.
This includes front left, front right, center, rear left, rear right and a subwoofer. Channel count is set to 8. 7.1 speaker setup. This includes front left, front right, center, rear left, rear right, side left, side right and a subwoofer. Channel count is set to 1. The speakers are monaural. Channel count is set to 2. Stereo output, but data is encoded in a way that is picked up by a Prologic/Prologic2 decoder and split into a 5.1 speaker setup. Channel count is set to 4. 4 speaker setup. This includes front left, front right, rear left, rear right. Channel count is unaffected. Channel count is set to 2. The speakers are stereo. This is the editor default. Channel count is set to 5. 5 speaker setup. This includes front left, front right, center, rear left, rear right. Describes when an AudioSource or AudioListener is updated. Updates the source or listener in the fixed update loop if it is attached to a Rigidbody, dynamic otherwise. Updates the source or listener in the dynamic update loop. Updates the source or listener in the fixed update loop. Provides access to the audio samples generated by Unity objects such as VideoPlayer. Number of sample frames available for consuming with Experimental.Audio.AudioSampleProvider.ConsumeSampleFrames. The number of audio channels per sample frame. Pointer to the native function that provides access to audio sample frames. Enables the Experimental.Audio.AudioSampleProvider.sampleFramesAvailable events. If true, buffers produced by ConsumeSampleFrames will get padded with silence if there are fewer sample frames available than asked for. Otherwise, the extra sample frames in the buffer will be left unchanged. Number of sample frames that can still be written to by the sample producer before overflowing. When the free sample count falls below this threshold, the Experimental.Audio.AudioSampleProvider.sampleFramesAvailable event is emitted and the associated native handler is invoked. Unique identifier for this instance.
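A typical managed consumer of AudioSampleProvider obtains the provider from a VideoPlayer and pulls interleaved samples in the sampleFramesAvailable callback. A sketch, assuming a VideoPlayer component on the same GameObject whose audio output mode has been set to API-only:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Experimental.Audio;
using UnityEngine.Video;

public class SampleReader : MonoBehaviour
{
    AudioSampleProvider provider;

    void Start()
    {
        var player = GetComponent<VideoPlayer>();
        player.audioOutputMode = VideoAudioOutputMode.APIOnly;
        player.prepareCompleted += p =>
        {
            provider = p.GetAudioSampleProvider(0); // audio track 0
            provider.sampleFramesAvailable += OnFramesAvailable;
            provider.enableSampleFramesAvailableEvents = true;
            provider.freeSampleFrameCountLowThreshold = provider.maxSampleFrameCount / 4;
        };
        player.Prepare();
    }

    void OnFramesAvailable(AudioSampleProvider p, uint frameCount)
    {
        // One float per channel per sample frame, interleaved.
        using (var buffer = new NativeArray<float>(
                   (int)frameCount * p.channelCount, Allocator.Temp))
        {
            uint written = p.ConsumeSampleFrames(buffer);
            // 'buffer' now holds 'written' interleaved sample frames.
        }
    }
}
```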
The maximum number of sample frames that can be accumulated inside the internal buffer before an overflow event is emitted. Object where this provider came from. Invoked when the number of available sample frames goes beyond the threshold set with Experimental.Audio.AudioSampleProvider.freeSampleFrameCountLowThreshold. Number of available sample frames. Invoked when the number of available sample frames goes beyond the maximum that fits in the internal buffer. The number of sample frames that were dropped due to the overflow. The expected playback rate for the sample frames produced by this class. Index of the track in the object that created this provider. True if the object is valid. Clear the native handler set with Experimental.Audio.AudioSampleProvider.SetSampleFramesAvailableNativeHandler. Clear the native handler set with Experimental.Audio.AudioSampleProvider.SetSampleFramesOverflowNativeHandler. Consume sample frames from the internal buffer. Buffer where the consumed samples will be transferred. How many sample frames were written into the buffer passed in. Type that represents the native function pointer for consuming sample frames. Id of the provider. See Experimental.Audio.AudioSampleProvider.id. Pointer to the sample frames buffer to fill. The actual C type is float*. Number of sample frames that can be written into interleavedSampleFrames. Release internal resources. Inherited from IDisposable. Type that represents the native function pointer for handling sample frame events. User data specified when the handler was set. The actual C type is void*. Id of the provider. See Experimental.Audio.AudioSampleProvider.id. Number of sample frames available or overflowed, depending on event type. Delegate for sample frame events. Provider emitting the event. How many sample frames are available, or were dropped, depending on the event. Set the native event handler for events emitted when the number of available sample frames crosses the threshold. 
Pointer to the function to invoke when the event is emitted. User data to be passed to the handler when invoked. The actual C type is void*. Set the native event handler for events emitted when the internal sample frame buffer overflows. Pointer to the function to invoke when the event is emitted. User data to be passed to the handler when invoked. The actual C type is void*. Spectrum analysis windowing types. W[n] = 0.42 - (0.5 * COS(n/N)) + (0.08 * COS(2.0 * n/N)). W[n] = 0.35875 - (0.48829 * COS(1.0 * n/N)) + (0.14128 * COS(2.0 * n/N)) - (0.01168 * COS(3.0 * n/N)). W[n] = 0.54 - (0.46 * COS(n/N)). W[n] = 0.5 * (1.0 - COS(n/N)). W[n] = 1.0. W[n] = TRI(2n/N). Use this class to record to an AudioClip using a connected microphone. A list of available microphone devices, identified by name. Stops recording. The name of the device. Get the frequency capabilities of a device. The name of the device. Returns the minimum sampling frequency of the device. Returns the maximum sampling frequency of the device. Get the position in samples of the recording. The name of the device. Query if a device is currently recording. The name of the device. Start recording with device. The name of the device. Indicates whether the recording should continue if lengthSec is reached, wrapping around and recording from the beginning of the AudioClip. The length in seconds of the AudioClip produced by the recording. The sample rate of the AudioClip produced by the recording. The function returns null if the recording fails to start. MovieTexture has been deprecated. Refer to the new movie playback solution VideoPlayer.
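The Microphone API described above can be used to capture live input into an AudioClip. A sketch, assuming an AudioSource on the same GameObject for monitoring the captured audio:

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class MicCapture : MonoBehaviour
{
    void Start()
    {
        if (Microphone.devices.Length == 0)
            return;

        string device = Microphone.devices[0];

        // Query the supported frequency range; 0 means "any frequency".
        int minFreq, maxFreq;
        Microphone.GetDeviceCaps(device, out minFreq, out maxFreq);
        int frequency = maxFreq == 0 ? 44100 : maxFreq;

        // Record a 10-second looping clip; null is returned if recording fails to start.
        AudioClip clip = Microphone.Start(device, true, 10, frequency);
        if (clip != null)
        {
            AudioSource source = GetComponent<AudioSource>();
            source.clip = clip;
            source.loop = true;
            // Wait until the microphone has actually produced its first samples.
            while (Microphone.GetPosition(device) <= 0) { }
            source.Play();
        }
    }
}
```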
The Audio module implements Unity's audio system. A structure describing the webcam device. True if camera faces the same direction a screen does, false otherwise. A human-readable name of the device. Varies across different systems. WebCam Textures are textures onto which the live video input is rendered. Set this to specify the name of the device to use. Return a list of available devices. Did the video buffer update this frame? Returns whether the camera is currently playing. Set the requested frame rate of the camera device (in frames per second). Set the requested height of the camera device. Set the requested width of the camera device. Returns a clockwise angle (in degrees), which can be used to rotate a polygon so camera contents are shown in the correct orientation. Returns whether the texture image is vertically flipped. Create a WebCamTexture. The name of the video input device to be used. The requested width of the texture. The requested height of the texture. The requested frame rate of the texture. Create a WebCamTexture. The name of the video input device to be used.
The requested width of the texture. The requested height of the texture. The requested frame rate of the texture. Returns pixel color at coordinates (x, y). Get a block of pixel colors. Returns the pixels data in raw format. Optional array to receive pixel data. Pauses the camera. Starts the camera. Stops the camera.
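The WebCamTexture API above can be wired up in a few lines. A sketch that renders the first available device onto this object's material, assuming a Renderer on the same GameObject; note that width, height and frame rate are requests, not guarantees:

```csharp
using UnityEngine;

public class WebCamDisplay : MonoBehaviour
{
    WebCamTexture webcam;

    void Start()
    {
        if (WebCamTexture.devices.Length == 0)
            return;

        WebCamDevice device = WebCamTexture.devices[0];
        webcam = new WebCamTexture(device.name, 1280, 720, 30);
        GetComponent<Renderer>().material.mainTexture = webcam;
        webcam.Play();
    }

    void Update()
    {
        // videoRotationAngle compensates for the device's physical orientation.
        if (webcam != null && webcam.didUpdateThisFrame)
            transform.localEulerAngles = new Vector3(0, 0, -webcam.videoRotationAngle);
    }

    void OnDestroy()
    {
        if (webcam != null)
            webcam.Stop();
    }
}
```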