unreal.NeuralMorphModel

class unreal.NeuralMorphModel(outer: Object | None = None, name: Name | str = 'None')

Bases: MLDeformerMorphModel

The neural morph model. This generates a set of highly compressed morph targets to approximate a target deformation based on bone rotations and/or curve inputs. During inference, the neural network inside this model runs on the CPU and outputs the morph target weights for the morph targets it generated during training. Groups of bones and curves can also be defined; each group generates a set of morph targets together as well, which can help when shapes depend on multiple inputs. An external morph target set is generated during training and serialized inside the ML Deformer asset that contains this model. When the ML Deformer component initializes, the morph target set is registered. Most of the heavy lifting is done by the UMLDeformerMorphModel class. The neural morph model has two modes: local and global. See ENeuralMorphMode for a description of the two.

C++ Source:

  • Plugin: NeuralMorphModel

  • Module: NeuralMorphModel

  • File: NeuralMorphModel.h

Editor Properties: (see get_editor_property/set_editor_property)

  • alignment_transform (Transform): [Read-Write] The transform that aligns the Geometry Cache to the SkeletalMesh. This will mostly apply some scale and a rotation, but no translation.

  • anim_sequence (AnimSequence): [Read-Write] The animation sequence to apply to the base mesh. This has to match the animation of the target mesh’s geometry cache. Internally we force the Interpolation property for this motion to be “Step”. deprecated: Use the training input anims instead.

  • batch_size (int32): [Read-Write] The number of frames per batch when training the model.

  • clamp_morph_weights (bool): [Read-Write] Should we enable morph target weight clamping? The weights are clamped to the minimum and maximum morph target weight values seen while running the training dataset through the network. The advantage of clamping is that it can make deformations more stable for input poses that were not seen during training. It essentially prevents the weights from ‘exploding’ into very large values, which could make the mesh look very bad.

  • delta_cutoff_length (float): [Read-Write] Sometimes certain problematic vertices can produce very long deltas. We can ignore these deltas by setting a cutoff value. Deltas that are longer than the cutoff value (in units) will be ignored and set to zero length.

  • enable_bone_masks (bool): [Read-Write] Enable the use of per-bone and bone-group masks. When enabled, an influence mask is generated per bone based on skinning info. This enforces deformations localized to the area around the joint. The benefits of enabling this can be a reduced GPU memory footprint, faster GPU performance and more localized deformations. If deformations do not happen near the joint, enabling this setting can lead to those deformations not being captured.

  • geometry_cache (GeometryCache): [Read-Write] The geometry cache that represents the target deformations. deprecated: Use the training input anims instead.

  • global_num_hidden_layers (int32): [Read-Write] The number of hidden layers that the neural network model will have. Higher numbers will slow down performance but can deal with more complex deformations.

  • global_num_morph_targets (int32): [Read-Write] The number of morph targets to generate in total. Higher numbers result in better approximation of the target deformation, but also result in a higher memory footprint and slower performance.

  • global_num_neurons_per_layer (int32): [Read-Write] The number of units/neurons per hidden layer. Higher numbers will slow down performance but allow for more complex mesh deformations.

  • include_normals (bool): [Read-Write] Include vertex normals in the morph targets? The advantage of this can be that it is higher performance than recomputing the normals. The disadvantage is it can result in lower quality and uses more memory for the stored morph targets.

  • invert_mask_channel (bool): [Read-Write] Enable this if you want to invert the mask channel values. For example, if you painted the neck seam vertices in red and you wish the painted vertices NOT to move, you have to invert the mask. By default you paint areas where the deformer should be active. If you enable the invert option, you paint areas where the deformer will not be active.

  • learning_rate (float): [Read-Write] The learning rate used during the model training.

  • local_num_hidden_layers (int32): [Read-Write] The number of hidden layers that the neural network model will have. Higher numbers will slow down performance but can deal with more complex deformations. For the local model you most likely want to stick with a value of one or two.

  • local_num_morph_targets_per_bone (int32): [Read-Write] The number of morph targets to generate per bone, curve or group. Higher numbers result in better approximation of the target deformation, but also result in a higher memory footprint and slower performance.

  • local_num_neurons_per_layer (int32): [Read-Write] The number of units/neurons per hidden layer. Higher numbers will slow down performance but allow for more complex mesh deformations. For the local mode you probably want to keep this around the same value as the number of morph targets per bone.

  • mask_channel (MLDeformerMaskChannel): [Read-Write] The channel data that represents the delta mask multipliers. You can use this to feather out the influence of the ML Deformer in specific areas, such as neck line seams, where the head mesh connects with the body. The painted vertex color values act like a weight multiplier on the ML Deformer deltas applied to that vertex. You can invert the mask as well.

  • max_num_lo_ds (int32): [Read-Write] The maximum number of Skeletal Mesh LOD levels to generate ML Deformer LODs for. Some examples: a value of 1 means we only store one LOD, which is LOD0. A value of 2 means we support this ML Deformer on LOD0 and LOD1. A value of 3 means we support this ML Deformer on LOD0, LOD1 and LOD2. We never generate more LOD levels for the ML Deformer than the number of LOD levels in the Skeletal Mesh, so if this value is set to 100 while the Skeletal Mesh has only 4 LOD levels, we will only generate and store 4 ML Deformer LODs. The default value of 1 means we do not support this ML Deformer at LOD levels other than LOD0. When cooking, the console variable “sg.MLDeformer.MaxLODLevelsOnCook” can be used to set the maximum value per device or platform.

  • max_training_frames (int32): [Read-Write] The maximum number of training frames (samples) to train on. Use this to train on a sub-section of your full training data.

  • mode (NeuralMorphMode): [Read-Write] The mode that the neural network will operate in. Local mode means there is one tiny network per bone, while global mode has one network for all bones together. The advantage of local mode is that it has higher performance, while global mode might result in better deformations.

  • morph_compression_level (float): [Read-Write] The morph target compression level. Higher values result in stronger compression, but could introduce visual artifacts. Most of the time this is a value between 20 and 200.

  • morph_delta_zero_threshold (float): [Read-Write] Morph target delta values that are smaller than or equal to this threshold will be zeroed out. This essentially removes small deltas from morph targets, which will lower the memory usage at runtime, however when set too high it can also introduce visual artifacts. A value of 0 will result in the highest quality morph targets, at the cost of higher runtime memory usage.

  • num_iterations (int32): [Read-Write] The number of iterations to train the model for. If you are quickly iterating then around 1000 to 3000 iterations should be enough. If you want to generate final assets you might want to use a higher number of iterations, like 10k to 100k. Once the loss doesn’t go down anymore, you know that more iterations most likely won’t help much.

  • regularization_factor (float): [Read-Write] The regularization factor. Higher values can help generate more sparse morph targets, but can also lead to visual artifacts. A value of 0 disables the regularization, and gives the highest quality, at the cost of higher runtime memory usage.

  • skeletal_mesh (SkeletalMesh): [Read-Write] The skeletal mesh that represents the linear skinned mesh.

  • smooth_loss_beta (float): [Read-Write] The beta parameter in the smooth L1 loss function, which specifies the absolute error below which a squared term is used. If the error is greater than or equal to this beta value, the L1 loss is used. This is a non-negative value, where 0 makes it behave exactly the same as an L1 loss. The value entered represents how many cm of error to allow before the L1 loss is used. Typically higher values give smoother results. If you see some noise in the trained results, even with a large number of samples and iterations, try increasing this value.

  • training_input_anims (Array[MLDeformerGeomCacheTrainingInputAnim]): [Read-Write]
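All of the properties above are reached through get_editor_property/set_editor_property, as noted at the top of the list. A minimal editor-scripting sketch (this constructs a standalone model instance for illustration; in practice you would edit the model embedded in an ML Deformer asset, and the LOCAL enum entry on unreal.NeuralMorphMode is assumed here):

```python
import unreal

# Hypothetical standalone instance; normally obtained from an MLDeformerAsset.
model = unreal.NeuralMorphModel()

# All Editor Properties go through get/set_editor_property.
model.set_editor_property('mode', unreal.NeuralMorphMode.LOCAL)
model.set_editor_property('local_num_morph_targets_per_bone', 6)
model.set_editor_property('num_iterations', 3000)
model.set_editor_property('learning_rate', 0.001)

unreal.log(model.get_editor_property('mode'))
```

This script only runs inside the Unreal Editor's Python environment, where the NeuralMorphModel plugin is loaded.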

property batch_size: int

[Read-Write] The number of frames per batch when training the model.

Type:

(int32)
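Since batch_size is the number of frames consumed per optimization step, the number of steps needed to visit every training frame once follows from simple arithmetic. A sketch, assuming standard mini-batching where the last batch may be partial:

```python
import math

def num_batches_per_epoch(num_training_frames: int, batch_size: int) -> int:
    """Number of optimization steps needed to visit every training frame once,
    assuming standard mini-batching where the last batch may be partial."""
    return math.ceil(num_training_frames / batch_size)

# e.g. 1000 captured frames with a batch size of 128 -> 8 batches per epoch
```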

property enable_bone_masks: bool

[Read-Write] Enable the use of per-bone and bone-group masks. When enabled, an influence mask is generated per bone based on skinning info. This enforces deformations localized to the area around the joint. The benefits of enabling this can be a reduced GPU memory footprint, faster GPU performance and more localized deformations. If deformations do not happen near the joint, enabling this setting can lead to those deformations not being captured.

Type:

(bool)

property global_num_hidden_layers: int

[Read-Write] The number of hidden layers that the neural network model will have. Higher numbers will slow down performance but can deal with more complex deformations.

Type:

(int32)

property global_num_morph_targets: int

[Read-Write] The number of morph targets to generate in total. Higher numbers result in better approximation of the target deformation, but also result in a higher memory footprint and slower performance.

Type:

(int32)

property global_num_neurons_per_layer: int

[Read-Write] The number of units/neurons per hidden layer. Higher numbers will slow down performance but allow for more complex mesh deformations.

Type:

(int32)

property learning_rate: float

[Read-Write] The learning rate used during the model training.

Type:

(float)

property local_num_hidden_layers: int

[Read-Write] The number of hidden layers that the neural network model will have. Higher numbers will slow down performance but can deal with more complex deformations. For the local model you most likely want to stick with a value of one or two.

Type:

(int32)

property local_num_morph_targets_per_bone: int

[Read-Write] The number of morph targets to generate per bone, curve or group. Higher numbers result in better approximation of the target deformation, but also result in a higher memory footprint and slower performance.

Type:

(int32)

property local_num_neurons_per_layer: int

[Read-Write] The number of units/neurons per hidden layer. Higher numbers will slow down performance but allow for more complex mesh deformations. For the local mode you probably want to keep this around the same value as the number of morph targets per bone.

Type:

(int32)

property mode: NeuralMorphMode

[Read-Write] The mode that the neural network will operate in. Local mode means there is one tiny network per bone, while global mode has one network for all bones together. The advantage of local mode is that it has higher performance, while global mode might result in better deformations.

Type:

(NeuralMorphMode)

property num_iterations: int

[Read-Write] The number of iterations to train the model for. If you are quickly iterating then around 1000 to 3000 iterations should be enough. If you want to generate final assets you might want to use a higher number of iterations, like 10k to 100k. Once the loss doesn’t go down anymore, you know that more iterations most likely won’t help much.

Type:

(int32)

property regularization_factor: float

[Read-Write] The regularization factor. Higher values can help generate more sparse morph targets, but can also lead to visual artifacts. A value of 0 disables the regularization, and gives the highest quality, at the cost of higher runtime memory usage.

Type:

(float)

property smooth_loss_beta: float

[Read-Write] The beta parameter in the smooth L1 loss function, which specifies the absolute error below which a squared term is used. If the error is greater than or equal to this beta value, the L1 loss is used. This is a non-negative value, where 0 makes it behave exactly the same as an L1 loss. The value entered represents how many cm of error to allow before the L1 loss is used. Typically higher values give smoother results. If you see some noise in the trained results, even with a large number of samples and iterations, try increasing this value.

Type:

(float)
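The behavior described above matches the standard smooth L1 (Huber-style) formulation: errors below beta are penalized quadratically, and errors at or above beta linearly. A sketch of that formulation for a single error value (the engine's exact implementation may differ):

```python
def smooth_l1(error: float, beta: float) -> float:
    """Smooth L1 loss for a single error value.

    Below beta the penalty is quadratic (0.5 * e^2 / beta); at or above beta
    it is linear (e - 0.5 * beta), which joins the quadratic branch smoothly.
    With beta == 0 this reduces exactly to the L1 loss.
    """
    e = abs(error)
    if beta == 0.0:
        return e
    if e >= beta:
        return e - 0.5 * beta
    return 0.5 * e * e / beta

# A 2 cm error with beta = 1.0 falls in the linear region: 2 - 0.5 = 1.5
```

Raising beta widens the quadratic region, which penalizes small errors more gently and tends to produce the smoother results mentioned above.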