Hi,
I want to get the `smpl_pose` so that I can export an FBX file via EDGE. The `smpl_pose` is just `global_orient` + `body_pose`, and I noticed that this data can be obtained from the `joint2smpl` module.
Here is the detail (path: `./visualize/simplify_loc2rot.py`):
```python
def joint2smpl(self, input_joints, init_params=None):
    _smplify = self.smplify  # if init_params is None else self.smplify_fast
    pred_pose = torch.zeros(self.batch_size, 72).to(self.device)
    pred_betas = torch.zeros(self.batch_size, 10).to(self.device)
    pred_cam_t = torch.zeros(self.batch_size, 3).to(self.device)
    keypoints_3d = torch.zeros(self.batch_size, self.num_joints, 3).to(self.device)

    # run the whole seqs
    num_seqs = input_joints.shape[0]

    # joints3d = input_joints[idx]  # *1.2 #scale problem [check first]
    keypoints_3d = torch.Tensor(input_joints).to(self.device).float()

    # if idx == 0:
    if init_params is None:
        pred_betas = self.init_mean_shape
        pred_pose = self.init_mean_pose
        pred_cam_t = self.cam_trans_zero
    else:
        pred_betas = init_params['betas']
        pred_pose = init_params['pose']
        pred_cam_t = init_params['cam']

    if self.joint_category == "AMASS":
        confidence_input = torch.ones(self.num_joints)
        # make sure the foot and ankle
        if self.fix_foot == True:
            confidence_input[7] = 1.5
            confidence_input[8] = 1.5
            confidence_input[10] = 1.5
            confidence_input[11] = 1.5
    else:
        print("Such category not settle down!")

    new_opt_vertices, new_opt_joints, new_opt_pose, new_opt_betas, \
    new_opt_cam_t, new_opt_joint_loss = _smplify(
        pred_pose.detach(),
        pred_betas.detach(),
        pred_cam_t.detach(),
        keypoints_3d,
        conf_3d=confidence_input.to(self.device),
        # seq_ind=idx
    )

    thetas = new_opt_pose.reshape(self.batch_size, 24, 3)
    thetas = geometry.matrix_to_rotation_6d(geometry.axis_angle_to_matrix(thetas))  # [bs, 24, 6]
    root_loc = torch.tensor(keypoints_3d[:, 0])  # [bs, 3]
    root_loc = torch.cat([root_loc, torch.zeros_like(root_loc)], dim=-1).unsqueeze(1)  # [bs, 1, 6]
    thetas = torch.cat([thetas, root_loc], dim=1).unsqueeze(0).permute(0, 2, 3, 1)  # [1, 25, 6, 196]

    return thetas.clone().detach(), {'pose': new_opt_joints[0, :24].flatten().clone().detach(), 'betas': new_opt_betas.clone().detach(), 'cam': new_opt_cam_t.clone().detach()}
```
The variable `new_opt_pose` is the `smpl_pose`.
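For reference, this is roughly how I grab and save it (a minimal sketch, not the repo's code; the save path is illustrative, and I am assuming `new_opt_pose` is `[num_frames, 72]` in the standard SMPL axis-angle layout):

```python
# Sketch: inside joint2smpl, right after the _smplify(...) call.
# Assumption: new_opt_pose is [num_frames, 72] axis-angle; the first 3 values
# per frame are global_orient, the remaining 69 are body_pose.
import numpy as np

smpl_pose = new_opt_pose.detach().cpu().numpy()   # [num_frames, 72]
global_orient = smpl_pose[:, :3]                   # [num_frames, 3]
body_pose = smpl_pose[:, 3:]                       # [num_frames, 69]

# Save for the FBX exporter (path is illustrative).
np.save('./myoutput/smpl_pose.npy', smpl_pose)
```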
I tried to render the SMPL mesh with `./visualize/render_mesh.py`. I modified that script and `quickstart.ipynb` to make the code work.
Here is the modified `./visualize/render_mesh.py`:
```python
import argparse
import os
from visualize import vis_utils
import shutil
from tqdm import tqdm

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_path", type=str, required=True, help='stick figure mp4 file to be rendered.')
    parser.add_argument("--cuda", type=bool, default=True, help='')
    parser.add_argument("--device", type=int, default=0, help='')
    params = parser.parse_args()

    sample_i, rep_i = 0, 0
    npy_path = os.path.join(params.input_path, 'results.npy')

    npy2obj = vis_utils.npy2obj(npy_path, sample_i, rep_i,
                                device=params.device, cuda=params.cuda)
```
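If you only need `new_opt_pose` and not the rendered mesh, the fitting step can also be called directly instead of going through `npy2obj`. This is a rough sketch, assuming the `joints2smpl` class in `visualize/simplify_loc2rot.py` keeps the MDM-style constructor `(num_frames, device_id, cuda)` and that `results.npy` was saved as in the notebook cell below:

```python
import numpy as np
from visualize.simplify_loc2rot import joints2smpl

# 'motion' is assumed to be [1, 22, 3, T] as saved from quickstart.ipynb
results = np.load('./myoutput/results.npy', allow_pickle=True).item()
joints = results['motion'][0].transpose(2, 0, 1)   # -> [T, 22, 3], per-frame xyz

j2s = joints2smpl(num_frames=joints.shape[0], device_id=0, cuda=True)
thetas, opt_dict = j2s.joint2smpl(joints)           # runs the SMPLify fit shown above
```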
There are some other dependencies; you can easily get them from PriorMDM (just copy the relevant directories).
I added the following at the end of `quickstart.ipynb`:
```python
xyz = pred_xyz.reshape(1, 22, 3, -1).detach().cpu().numpy()
all_lengths = xyz.shape[-1]
np.save('./myoutput/results.npy',
        {'motion': xyz, 'text': clip_text, 'lengths': [all_lengths],
         'num_samples': 1, 'num_repetitions': 1})
```
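To double-check that the saved file has what `npy2obj` expects, I load it back before rendering (just a quick sanity check; the keys follow the MDM `results.npy` convention, and a dict saved with `np.save` needs `allow_pickle=True` plus `.item()` to read back):

```python
import numpy as np

results = np.load('./myoutput/results.npy', allow_pickle=True).item()
print(results.keys())            # motion, text, lengths, num_samples, num_repetitions
print(results['motion'].shape)   # expected (1, 22, 3, T)
print(results['lengths'])        # expected [T]
```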
Finally, I executed the following command and saved `new_opt_pose`:

```
python -m visualize.render_mesh --input_path ./myoutput
```
I can convert it to an FBX file, but the resulting motion is abnormal. I also tried other methods, such as STMC, Lodge, and PriorMDM. STMC and Lodge are fine: their output exports to FBX with correct motion.
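To narrow down whether the problem is in the SMPLify fit itself or in the FBX conversion, I ran a sanity check along these lines (a rough sketch, assuming the `smplx` package, an SMPL model under `./body_models`, a saved pose of shape `[T, 72]` in axis-angle, and generated joints that follow the HumanML3D convention, i.e. the first 22 SMPL joints; the file paths are illustrative):

```python
import numpy as np
import torch
import smplx  # assumption: smplx installed, SMPL model files under ./body_models

# Pose saved earlier (illustrative path, assumed [T, 72] axis-angle)
poses = torch.from_numpy(np.load('./myoutput/smpl_pose.npy')).float()
# Generated joints: [1, 22, 3, T] -> [T, 22, 3]
joints_in = np.load('./myoutput/results.npy', allow_pickle=True).item()['motion'][0]
joints_in = torch.from_numpy(joints_in).permute(2, 0, 1).float()

model = smplx.create('./body_models', model_type='smpl', gender='neutral',
                     batch_size=poses.shape[0])
with torch.no_grad():
    out = model(global_orient=poses[:, :3], body_pose=poses[:, 3:])

# Compare root-relative joints so the global translation estimated by SMPLify drops out.
fit = out.joints[:, :22] - out.joints[:, :1]
ref = joints_in - joints_in[:, :1]
err = (fit - ref).norm(dim=-1).mean()
print(f'mean root-relative joint error: {err.item():.4f} m')
```

If this error is small, the fit itself is probably fine and the problem is more likely in how the pose and root transform are interpreted during FBX export.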
Why is the `smpl_pose` exported by SATO or PriorMDM abnormal?