Research Article  Open Access
Jin Tang, Jin Liu, Jian-Qin Yin, "A Hierarchical Static-Dynamic Encoder-Decoder Structure for 3D Human Motion Prediction with Residual CNNs", Mathematical Problems in Engineering, vol. 2020, Article ID 7064910, 12 pages, 2020. https://doi.org/10.1155/2020/7064910
A Hierarchical Static-Dynamic Encoder-Decoder Structure for 3D Human Motion Prediction with Residual CNNs
Abstract
Human motion prediction aims at predicting future poses according to the motion dynamics given by a sequence of history poses. We present a new hierarchical static-dynamic encoder-decoder structure to predict human motion with residual CNNs. Specifically, to better mine the law of the motion, a new residual CNN-based structure, vCMU, is proposed to encode not only the static information but also the dynamic information. Based on the vCMU, a hierarchical structure is proposed to model the different correlations between the different given poses and the predicted pose. Moreover, a new loss function combining the static and dynamic information is introduced in the decoder to guide the prediction of the future poses. Our framework features two advantages: (1) more effective dynamics are mined, thanks to the fusion of pose information with the dynamic information between poses and to the hierarchical structure; (2) better decoding or prediction performance, thanks to the mid-level supervision introduced by the new loss function considering both the static and dynamic losses. Extensive experiments show that our algorithm achieves state-of-the-art performance on the challenging G3D and FNTU datasets. The code is available at https://github.com/liujin0/SDnet.
1. Introduction
3D human motion prediction can be regarded as the problem of predicting future poses according to the spatial correlation and the temporal dynamics mined from the observed poses. Traditional methods are based on the encoder-decoder framework: the encoder mines the motion dynamics, which the decoder then uses to generate the future poses. Obviously, motion dynamics modeling is the key to predicting poses.
To better mine human dynamics, we first analyze the characteristics of human motion. A motion often includes relatively static parts and moving dynamic parts; for example, in an "eating" action sequence, the hand joints may move substantially while the other joints remain relatively steady. At the same time, as stated in [1], human vision models the relatively static and the dynamic information separately. However, most existing methods only use a residual connection to introduce the dynamic information, which couples the static and dynamic information strongly. This motivates us to present a new scheme that explicitly predicts the static and dynamic poses in a relatively weakly coupled way.
For dynamics modeling, recurrent neural networks (RNNs) are usually used [2, 3]. However, it is known that RNN-based methods cannot model long-term dynamics well, which is important for predicting human motion. Besides, as argued in [4], RNN structures have other problems, for example, error accumulation and discontinuous results. Consequently, several works have proposed to model human motion with CNNs [5]. In this paper, we also model the static and dynamic information based on CNNs, and we present a new hierarchical CNN-based encoder-decoder structure, the static-dynamic network (SDnet), to predict the future poses, leading to better performance than previous methods, as illustrated in Figure 1.
When encoding the observed motion dynamics, a new structure, the vCMU (velocity-cascade multiplicative unit), is presented, which models not only the history poses but also the movement information between consecutive frames. After encoding, a hierarchical asymmetric network built from blocked vCMUs is employed to model the spatiotemporal information and highlight the dynamics of the historical human poses. To keep the features of the last pose, the hierarchical asymmetric network models the different contributions of the previously given frames by controlling the number of vCMUs each frame passes through. When decoding the encoded motion dynamics, we introduce a mid-level supervision signal to guide the modeling of the dynamic information. Using this mid-level supervision, on the one hand, we can decode the dynamic and static information of the motion in a relatively weakly coupled manner; on the other hand, we can introduce different dynamic information for different predicted poses.
In summary, our contributions are as follows:
(1) We propose a new hierarchical asymmetric structure, SDnet, to predict the future poses. Different from existing models, SDnet features three properties: it explicitly models the dynamic motion information in the encoder; it decodes the learned dynamics into the future poses by modeling the static and dynamic information separately; and it models the different correlations between different temporal frames and predicted frames using a hierarchical structure.
(2) A new structure, vCMU, is presented to explicitly model the dynamic motion information. The proposed vCMU takes as input not only the consecutive frames but also the temporal difference between them; therefore, the vCMU can explicitly model the dynamic motion information.
(3) A mid-level supervision signal is constructed to guide the explicit modeling of the dynamic information of the motion. This mid-level supervision enables our framework to model the static and dynamic information separately and to introduce different dynamic information for different predicted frames.
The remainder of the paper is organized as follows. Section 2 reviews the related work. Section 3 discusses our model in detail. The datasets, evaluation criteria, experimental comparisons of different methods, and ablation studies are presented in Section 4. Finally, conclusions and future work directions are stated in Section 5.
2. Related Work
Many recurrent neural networks have been designed for predictive learning on spatiotemporal data by modeling historical frames. Fragkiadaki et al. [6] proposed the encoder-recurrent-decoder (ERD) model for the recognition and prediction of human body pose in videos and motion capture, which extended previous long short-term memory (LSTM) models. Martinez et al. [2] further extended this scheme by modeling the velocity of joints instead of directly estimating the body pose, employing a single linear layer for pose feature encoding and hidden state decoding. Tang et al. [7] proposed a new RNN-based model to predict long-term human motion by exploring motion context and enhancing motion dynamics. For human motion prediction and synthesis, Gopalakrishnan et al. [8] introduced the VTLN-RNN architecture, which used motion derivative features as well as a novel multi-objective loss function.
As discussed in [9], conventional recurrent neural networks, such as LSTM [10] and GRU [11], are employed to model motion contexts but inherently have difficulties in capturing long-term dependencies. Convolutional neural networks (CNNs) have therefore been introduced for motion prediction [12, 13]. Li et al. [14] proposed a convolutional sequence-to-sequence model for human motion prediction, which adopted diverse types of convolutional encoders to use both distant and nearby temporal motion information. Van den Oord et al. [15] presented WaveNet, which is built upon causal convolution structures; this structure can also be used in motion prediction. Liu et al. [16, 17] proposed SSNet to model the motion dynamics with a sliding window over the temporal axis based on a dilated convolutional network; differently, it focused on the observed part of an ongoing action in untrimmed videos that can include multiple actions. Kalchbrenner et al. [18] proposed the multiplicative unit (MU), a non-recurrent convolutional structure whose neuron connectivity is quite like that of LSTMs, and proposed a residual multiplicative block (RMB) to ease gradient propagation. Xu et al. [19] introduced an entirely CNN-based architecture, PredCNN, to model the spatial information of each frame and capture the temporal evolution of previous frames hierarchically with a cascade multiplicative unit (CMU) that receives two consecutive frames as input. Liu et al. [20] applied PredCNN with a new skeletal representation to obtain more accurate motion prediction.
3. Method
3.1. Framework of the Model
We propose a novel end-to-end model for human pose prediction. Our model explicitly captures the temporal dependencies between adjacent frames and predicts all future frames in a single step, which avoids error accumulation.
The framework of the model consists of three parts, as shown in Figure 1. Input model: the input model captures the spatial information of each skeleton. As is known, a skeleton sequence is a set of joint coordinates. We transform the skeletons' joint coordinates into pseudo images; then, the encoders extract the spatial features of the observed skeletons. Dynamic model: the dynamic model is a vCMU hierarchical asymmetric network that models the spatiotemporal information and highlights the dynamics of the historical human poses. On the one hand, we propose the vCMU to explicitly model the difference between two consecutive input frames, which benefits the modeling of dynamic evolution. On the other hand, the hierarchical asymmetric structure keeps the features of the last pose by passing it through fewer vCMUs. This architecture is inspired by the observation that simply repeating the last body pose gives a relatively small Euclidean distance from the ground truth [2, 7]. Output model: the output model is a static-dynamic integrated residual module in which the static and dynamic information from two branches is integrated. On one branch of this module, the latest frame is used to retain the static information. On the other branch, the decoder is expected to predict the dynamic information, i.e., the velocity. Finally, the future poses are predicted by integrating the static and dynamic information at a fine-grained level.
3.2. Input Model
We select and reorder the 18 joints that are informative enough to represent human motion, as shown in Figure 2. Given the skeleton of a person in frame $t$, the input frame is the observed skeleton $X_t$. $X_t$ is an input of size $N \times D$, where $N$ is the number of main joints in each frame ($N = 18$) and $D$ is the number of dimensions describing each joint ($D = 3$ if only the coordinates are used). $X_t$ can thus be formulated as a matrix in which each row vector is a 3D joint coordinate. Then, the input is transformed into a pseudo image.
Because the residual multiplicative block (RMB) [18] is powerful for spatial modeling with its LSTM-like structure, the RMB is used as the basic unit and is cascaded for $n$ layers as the encoder, where $n$ is a hyperparameter. After encoding, each pseudo image is mapped into a feature space describing its spatial information.
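To make the pseudo-image construction concrete, here is a minimal sketch (ours, not the authors' released code), assuming the observed skeletons arrive as a (frames, joints, dimensions) array with $N = 18$ joints and $D = 3$ coordinates; each frame becomes an 18 × 3 pseudo image whose "pixels" are normalized joint coordinates.

```python
import numpy as np

def to_pseudo_images(sequence):
    """Map a skeleton sequence of shape (T, N, D) to T pseudo images:
    each frame becomes an N x D image with joints as rows, coordinate
    dimensions as columns, and values scaled to [0, 1] like pixels."""
    seq = np.asarray(sequence, dtype=np.float32)
    lo, hi = seq.min(), seq.max()
    return (seq - lo) / (hi - lo + 1e-8)

observed = np.random.rand(10, 18, 3)   # 10 observed frames, 18 joints, 3D coords
images = to_pseudo_images(observed)
print(images.shape)                    # (10, 18, 3): one pseudo image per frame
```

The normalization scheme here is an assumption for illustration; any monotone scaling of coordinates to an image-like range serves the same purpose.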
3.3. Dynamic Model
3.3.1. vCMU
As mentioned before, motion dynamics modeling is the key to predicting poses. In this paper, we design a novel vCMU based on a new residual CNN to explicitly capture the dynamic motion information between adjacent frames. Our key motivations for the vCMU are twofold.
Most existing CNN methods only encode the static information. To encode the static and dynamic information in a relatively weakly coupled way, we design the dynamic vCMU unit based on the CMU. The motions along the three directions are introduced by calculating the element-wise difference between adjacent frames, focusing the unit on predicting dynamic evolution.
For dynamics modeling, RNNs are usually used to capture the underlying temporal dependencies in sequential data. To capture the temporal dependencies explicitly instead, the proposed vCMU is formulated like a residual function that directly accounts for the temporal difference between adjacent frames.
The diagram of the proposed vCMU is shown in Figure 3. It accepts two consecutive inputs $H_{t-1}$ and $H_t$ and generates an output $O_t$, where $H_t$ is the encoded input image at the current time $t$ and $H_{t-1}$ is the encoded input image at the previous time $t-1$. We first apply a CMU to generate a new state that contains rich spatial and hidden temporal features. Then, the difference between the consecutive frames is directly added element-wise to the output of the CMU. With such a residual structure, we explicitly model the dynamic motion information. Given that $W$ denotes the parameters of the CMU, the proposed vCMU can be formulated as
$$O_t = \mathrm{CMU}(H_{t-1}, H_t; W) + (H_t - H_{t-1}),$$
where $\mathrm{CMU}(\cdot)$ represents the formulation of the CMU and $H_t - H_{t-1}$ is regarded as the residual. The proposed vCMU has the same number of parameters as the CMU, while being more powerful in modeling dynamic evolution.
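The residual formulation above can be sketched in a few lines; `toy_cmu` below is a hypothetical stand-in for the real convolutional CMU, used only to show where the element-wise temporal difference enters.

```python
import numpy as np

def vcmu(h_prev, h_cur, cmu):
    """vCMU: apply a CMU to two consecutive encoded frames, then add the
    element-wise difference between them as an explicit dynamic residual."""
    return cmu(h_prev, h_cur) + (h_cur - h_prev)

# hypothetical stand-in for the convolutional CMU (any map of two states)
toy_cmu = lambda a, b: 0.5 * (a + b)

h_prev = np.zeros((18, 3))     # encoded frame at time t-1
h_cur = np.ones((18, 3))       # encoded frame at time t
out = vcmu(h_prev, h_cur, toy_cmu)
print(out[0, 0])               # 0.5*(0+1) + (1-0) = 1.5
```

Note that the residual adds no parameters, matching the claim that the vCMU has the same parameter count as the CMU.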
3.3.2. Hierarchical Asymmetric Structure
Among all input frames, the last body pose and its movement may provide the most information for prediction [2, 7]. Motivated by this idea, we propose a hierarchical asymmetric structure that uses the proposed vCMU unit as a building block. By taking advantage of these hierarchical asymmetric vCMU blocks, the temporal receptive field of the latest frames is enlarged the most.
As shown in Figure 4(a), the hierarchical asymmetric network consists of vCMU blocks and is employed to model the spatiotemporal information and highlight the dynamics of the historical human poses. Compared with PredCNN [19], which only predicts one future frame at a time, the proposed hierarchical asymmetric structure is a feedforward architecture that predicts all future frames (10 frames) in a single step, which avoids error accumulation.
Since repeating the last body pose gives a relatively small error, the proposed asymmetric structure can enhance motion dynamics and account for the different contribution of each given frame: later frames are more correlated with the future frames. As shown in Figure 4(b), in the symmetric network each given frame is processed equally in spatiotemporal information [20], and the encoded latest frame passes through 4 vCMU blocks before the future pose is predicted. In contrast, our asymmetric network structure handles the different correlations between different given frames and future frames: the encoded latest frame passes through only 2 vCMU blocks, which keeps the features of the last pose. Besides, our network structure reduces the number of operations by passing frames through fewer vCMU blocks.
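The contrast between the two reductions can be sketched as follows; `merge` is a hypothetical stand-in for one vCMU block, and the point is only the wiring: in the asymmetric fold the latest frame enters near the top and traverses two blocks, while older frames traverse a deeper subtree.

```python
import numpy as np

def merge(a, b):
    # hypothetical stand-in for one vCMU block: CMU-like mix plus difference
    return 0.5 * (a + b) + (b - a)

def symmetric_reduce(frames):
    """Balanced pairwise reduction: every frame passes roughly log2(n) blocks."""
    while len(frames) > 1:
        nxt = [merge(frames[i], frames[i + 1]) for i in range(0, len(frames) - 1, 2)]
        if len(frames) % 2:
            nxt.append(frames[-1])          # carry the odd frame upward
        frames = nxt
    return frames[0]

def asymmetric_reduce(frames):
    """Skewed reduction: the older frames are reduced first, so the encoded
    latest frame only passes through two merge blocks before the output."""
    head = symmetric_reduce(frames[:-2])    # older frames: deep path
    tail = merge(frames[-2], frames[-1])    # latest frame, block 1
    return merge(head, tail)                # latest frame, block 2

frames = [np.ones((18, 3)) for _ in range(10)]
print(asymmetric_reduce(frames).shape)     # (18, 3)
```

This is a toy sketch of the folding pattern, not the authors' exact block wiring; the released code at https://github.com/liujin0/SDnet is authoritative.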
3.4. Output Model
To predict the static and dynamic poses in a relatively weakly coupled way, a new loss function combining the static and dynamic information is introduced in the output model to guide the prediction of the future poses. As shown in Figure 5, the output model is composed of two branches: one branch is expected to produce the static sub-pose, and the other is used for the dynamic information. We merge the static and dynamic information of the two branches to predict the future pose:
$$\hat{X} = X_s + V. \qquad (3)$$
In equation (3), $\hat{X}$ represents the future pose, $X_s$ represents the static information, and $V$ is the dynamic information; they are calculated from the left and right branches separately.
3.4.1. Decoding Dynamics under Supervision
The velocity of a moving joint in space can be decomposed orthogonally into three components, and the three velocity components may differ. Therefore, instead of treating the motion of a joint as an indivisible whole, we decompose the three-dimensional motion into three one-dimensional motions and distinguish the motion differences at a fine-grained level. Turning now to the modeling of the dynamic information: specifically, the three one-dimensional velocity components of each joint are predicted, as shown in the right branch of Figure 5. The decoder is similar to the encoder; both consist of the basic RMB block, but the number of RMBs stacked in each decoder is $m$, where $m$ is a hyperparameter. We use the decoder to generate the velocity from the output of the vCMU hierarchical asymmetric network, which contains rich static and dynamic information.
A new loss function combining the static and dynamic information is proposed; a velocity loss is included in the loss function to learn the dynamic information. The loss function is formulated as equation (4):
$$L = L_s + L_v, \qquad (4)$$
where $L_s$ is the static (pose) loss between the predicted and ground-truth poses and $L_v$ is the velocity loss. To better guide the learning of the velocity during training, a mid-level supervision signal is constructed to assist training. We introduce the supervision information
$$L_v = \sum_{i} \left\| \hat{V}_i - V_i \right\|_2^2,$$
where $V_i$ is the ground-truth velocity of the $i$-th predicted frame relative to the last observed frame:
$$V_i = X_{T+i} - X_T.$$
This mid-level supervision enables our framework to model the static and dynamic information separately and to introduce different dynamic information for different predicted frames.
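Under the assumption (ours) that both terms are mean squared errors, the combined loss with the mid-level velocity supervision can be sketched as:

```python
import numpy as np

def sd_loss(pred_poses, pred_vels, gt_future, last_observed):
    """Static-dynamic loss sketch: a pose (static) MSE term plus a velocity
    (dynamic) MSE term, where each ground-truth velocity is the offset of a
    future frame from the last observed frame (the mid-level supervision)."""
    gt_vels = gt_future - last_observed[None]   # V_i = X_{T+i} - X_T
    static = np.mean((pred_poses - gt_future) ** 2)
    dynamic = np.mean((pred_vels - gt_vels) ** 2)
    return static + dynamic

last = np.zeros((18, 3))              # last observed pose X_T
future = np.random.rand(10, 18, 3)    # ground-truth future poses
perfect = sd_loss(future, future - last[None], future, last)
print(perfect)                        # 0.0 for perfect predictions
```

The exact weighting between the two terms in the paper is not recoverable from the text; an equal-weight sum is shown here for illustration.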
4. Experiments
4.1. Datasets
We evaluate our proposed model on two datasets, G3D [21] and the filtered NTU RGB+D dataset (FNTU). G3D: the G3D dataset [21] contains 10 subjects performing 20 gaming actions in 7 action sequences captured with a Microsoft Kinect. Most sequences contain multiple actions. It consists of 210 samples in total. For a fair comparison, we adopt the same training and test splits as in [20], provided with the released data. FNTU: the FNTU dataset [20] is collected from the NTU RGB+D dataset. The NTU RGB+D dataset [22] contains 60 action classes and 56,880 video samples; each video sample contains one action. Following [20], the FNTU dataset consists of 18,102 forward skeleton samples selected by removing mutual actions from the NTU RGB+D dataset. We use the same training and test sets, which are publicly available.
4.2. Metrics and Baselines
Metrics: we follow the standard evaluation protocol used in [19, 20] and choose the mean squared error (MSE) and the mean absolute error (MAE) between the predicted frames and the ground-truth frames in the joint coordinate space as the two basic evaluation metrics. As mentioned in [4], angles are not a good representation for evaluating motion prediction; we therefore employ the Euclidean distance between the ground-truth pose and our predicted pose in the 3D coordinate space as the error metric. Baselines: pose prediction based on 3D joint coordinate sequences is rarely researched. Three baselines are selected for comparison with our method:
PredCNN: Predictive Learning with Cascade Convolutions [19], an efficient and effective model for video prediction.
STE: the Symmetric Temporal Encoder [23], which converts each mocap frame into a joint coordinate frame in Cartesian coordinates.
PISEP^{2}: Pseudo Image Sequence Evolution-based 3D Pose Prediction [20], the state-of-the-art method with a new skeletal representation.
We use the code and data released by [20] to reproduce the above baselines, and our model is evaluated on the same datasets. The results are reported below.
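For reference, the two metrics reduce to simple averages over all predicted frames, joints, and coordinate dimensions; a minimal sketch:

```python
import numpy as np

def mse(pred, gt):
    """Mean squared error over frames, joints, and coordinate dimensions."""
    return float(np.mean((pred - gt) ** 2))

def mae(pred, gt):
    """Mean absolute error over frames, joints, and coordinate dimensions."""
    return float(np.mean(np.abs(pred - gt)))

gt = np.zeros((10, 18, 3))      # 10 future frames, 18 joints, 3D coords
pred = np.full_like(gt, 0.1)    # a constant-offset prediction
print(round(mse(pred, gt), 4), round(mae(pred, gt), 4))  # 0.01 0.1
```

Whether the paper averages per frame before averaging over frames is not specified; with equal-sized frames the two orders agree.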
4.3. Comparison with Baselines
In this section, we compare our model with the above baselines. In the experiments, 10 frames are given to predict the next 10 frames on all datasets. To be consistent with the literature, our model has 4 RMBs in the encoder and 6 RMBs in the decoder. Note that, to avoid overfitting on the G3D dataset, the numbers of RMBs in the encoder and the decoder are reduced to 2 and 3, respectively. Our results demonstrate that state-of-the-art performance is achieved on both datasets.
4.3.1. Quantitative Analysis of the Experimental Results
We compare our model with the baselines and show the quantitative prediction errors in Table 1. PredCNN [19] can only predict one frame at a time, and its recursive structure causes error accumulation, which leads to the poorest performance. As shown in Table 1, our model achieves state-of-the-art performance on both datasets. The MSE decreases from 0.1199 to 0.1106 on G3D and from 0.1210 to 0.1131 on FNTU. The MAE decreases from 1.1101 to 0.9782 on G3D and from 1.1651 to 1.0675 on FNTU. PISEP^{2} treats each given frame equally during spatiotemporal information processing, in contrast to our vCMU hierarchical asymmetric network, which treats the latest given frame as the most relevant to the future frames. On the other hand, our model employs a dynamic decoder to decode the dynamic and static information of the motion in a relatively weakly coupled manner; the dynamic decoder helps to better capture the dynamic information.
4.3.2. Quantitative Analysis of Framewise Results
The frame-wise performance of the different methods is shown in Figure 6 to analyze the performance at each timestep, where the horizontal axis represents frames and the vertical axis represents the MSE or MAE of each frame. The mean MSE of the predicted poses per frame is 0.1106 and 0.1131, and the mean MAE per pose is 0.9782 and 1.0675, on G3D and FNTU, respectively. The experimental results show that PredCNN [19] suffers from error accumulation, which leads to the poorest performance. Compared with the state-of-the-art PISEP^{2} [20], our method significantly decreases the error at all timesteps, especially for short-term prediction. Moreover, our method effectively resolves the discontinuities in prediction, especially at the first predicted frame. This may be due to the coherence of human movements: since repeating the last body pose already gives a small error, paying attention to the last body pose helps predict the following movements. As shown in Figure 6, SDnet significantly enhances the predictive performance on both short-term and long-term predictions, and our framework achieves the best performance, which further evidences the effectiveness of our proposed method.
4.3.3. Quantitative Analysis of Jointwise Results
To further analyze the performance of our method, the joint-wise performance of the different methods is shown in Figure 7, where the horizontal axis represents joints and the vertical axis represents the MSE or MAE of each joint. (1) On G3D: in general, the errors of the upper-limb joints are relatively large, especially for the "wrist right," "wrist left," "hand right," and "hand left" joints, on both MSE and MAE. The probable reason is that the actions in G3D are upper-limb-related, so these joints are the most active, which may lead to large errors there. Compared with the upper-limb joints, the joints of the lower limbs and trunk are relatively stable. Interestingly, the MAE errors of the lower limbs and the trunk are approximately equal, while on MSE the errors of the lower limbs are larger, especially for the "ankle right" and "ankle left" joints; this may be because MSE is sensitive to action amplitude. Compared with the state-of-the-art PISEP^{2} [20], our method significantly improves upon their performance at all joints, especially at the first predicted frame, which demonstrates that SDnet can avoid the discontinuities in prediction. (2) On FNTU: the errors of the upper-limb joints are relatively large and the errors of the lower-limb and trunk joints are relatively small, consistent with the results on G3D. This may be due to the intense joint movements of the upper limbs. Compared with the other methods, our method again achieves the best results for both MSE and MAE. More specifically, compared with [20], our method outperforms at all joints overall, which further shows the effectiveness of our method in capturing the dynamic information of the previous poses.
4.3.4. Qualitative Analysis of the Experimental Results
We visualize the corresponding frame-wise prediction results in Figure 8. The previous poses and the ground-truth sequences are shown in blue, and the predicted motion is shown in red. Starting from the top, the five sequences correspond to the ground truth, STE, PredCNN, PISEP^{2}, and our work, respectively. The results generated by our model are more accurate and reasonable. As shown in Figure 8(a), PredCNN performs the worst; this may be because the first predicted frame is inaccurate and the error accumulates. STE [23], in turn, converges to the mean body pose in the predicted poses. It is important to note the difference between our method and the previous state-of-the-art PISEP^{2}: we observe that PISEP^{2} stays less consistent with the ground truth than ours, especially on the first predicted future frame, as shown in the green circle. A significant discontinuity, especially at the first predicted frame, is often observed not only for PISEP^{2} but also for other works in the literature [24]. Our SDnet pays more attention to the motionless directions based on the last body pose, and repeating the last body pose gives a relatively small error. Note that even though PISEP^{2} can avoid error accumulation, it is still worse than ours at the last future frame. The reason can be attributed to the vCMU, which can efficiently extract movement trends, propagate information, and benefit the modeling of dynamic evolution.
We provide a qualitative visualization of frame-wise prediction on the FNTU dataset in Figure 8(b). The experimental results show that our framework avoids error accumulation. Again, our predictions are closer to the ground truth than those of the baselines. As shown in the green circle, our model predicts the motion more accurately than the other models when the person makes a large movement. Our model captures the spatial information with an LSTM-like block and the temporal information with a hierarchical asymmetric structure, which considers the different contributions of the previously given frames. Besides, the results generated by the other methods converge to the mean body pose; in contrast, our method predicts motions with high dynamics more reasonably.
4.4. Ablation Studies
To provide a deeper understanding of our method, we next run ablation studies to evaluate the influence of its components. We replace the vCMU units with CMU units, within the same hierarchical asymmetric network, to study the contribution of the vCMU unit. We also compare our approach with a symmetric structure network, as shown in Figure 4(b), to study the influence of the hierarchical asymmetric structure.
The results of these experiments are provided in Tables 2 and 3. As shown in Table 2, the errors increase on G3D and FNTU, especially the MAE, when CMUs replace the vCMUs. These results show that our vCMU units provide a significant boost in accuracy.
Finally, we evaluate the importance of the asymmetric structure versus the symmetric structure network. The results, provided in Table 3, demonstrate the benefit of the asymmetric structure. Note that, probably because the G3D dataset is relatively small, the symmetric structure performs well on MAE there; however, the hierarchical asymmetric network still obtains better results on both MSE and MAE on FNTU and on MSE on G3D.
Altogether, these ablation studies evidence the importance of both aspects of our contribution: the vCMU, which explicitly models the dynamic motion information, and the hierarchical asymmetric structure, which models the different correlations between different temporal frames and predicted frames [24].
5. Conclusions
This paper presented SDnet, a hierarchical convolutional encoder-decoder architecture for static and dynamic pose predictive learning. Specifically, we introduced the velocity-cascade multiplicative unit (vCMU), based on a new residual CNN, to explicitly capture the dynamic motion information between adjacent frames. A hierarchical asymmetric structure using the vCMUs is proposed to predict all future frames in one step, which enhances motion dynamics and models the different contributions of the previously given frames. The proposed SDnet model achieves state-of-the-art performance on the G3D and FNTU datasets. Our future work will focus on feeding more accurate weighting of the historical human body information into SDnet to reach more accurate prediction.
Data Availability
The code to support the findings of this study is available at https://github.com/liujin0/SDnet.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors’ Contributions
J. Tang and J. Liu contributed equally.
Acknowledgments
The research in this paper used the NTU RGB + D Action Recognition Dataset made available by the ROSE Lab at the Nanyang Technological University, Singapore.
References
[1] M. Wischnewski, A. Belardinelli, W. X. Schneider, and J. J. Steil, "Where to look next? Combining static and dynamic proto-objects in a TVA-based model of visual attention," Cognitive Computation, vol. 2, no. 4, pp. 326–343, 2010.
[2] J. Martinez, M. J. Black, and J. Romero, "On human motion prediction using recurrent neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2891–2900, Honolulu, HI, USA, July 2017.
[3] L.-Y. Gui, Y.-X. Wang, X. Liang, and J. M. Moura, "Adversarial geometry-aware human motion prediction," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 786–803, Munich, Germany, September 2018.
[4] W. Mao, M. Liu, M. Salzmann, and H. Li, "Learning trajectory dependencies for human motion prediction," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, South Korea, November 2019.
[5] V. Vukotić, S.-L. Pintea, C. Raymond, G. Gravier, and J. C. van Gemert, "One-step time-dependent future video frame prediction with a convolutional encoder-decoder neural network," in Proceedings of the International Conference on Image Analysis and Processing, pp. 140–151, Catania, Italy, September 2017.
[6] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik, "Recurrent network models for human dynamics," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, December 2015.
[7] Y. Tang, L. Ma, W. Liu, and W. Zheng, "Long-term human motion prediction by modeling motion context and enhancing motion dynamic," in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 2018.
[8] A. Gopalakrishnan, A. Mali, D. Kifer, C. L. Giles, and A. G. Ororbia, "A neural temporal model for human motion prediction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, June 2019.
[9] Z. Liu, S. Wu, S. Jin et al., "Towards natural and accurate future motion prediction of humans and animals," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, June 2019.
[10] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[11] K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio, "On the properties of neural machine translation: encoder-decoder approaches," 2014, https://arxiv.org/abs/1409.1259.
[12] J. Zhang, Y. Zheng, and D. Qi, "Deep spatio-temporal residual networks for citywide crowd flows prediction," in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 1655–1661, San Francisco, CA, USA, February 2017.
[13] M. Mathieu, C. Couprie, and Y. LeCun, "Deep multi-scale video prediction beyond mean square error," in Proceedings of the International Conference on Learning Representations (ICLR), San Juan, PR, USA, May 2016.
[14] C. Li, Z. Zhang, W. S. Lee, and G. H. Lee, "Convolutional sequence to sequence model for human dynamics," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5226–5234, Salt Lake City, UT, USA, June 2018.
[15] A. van den Oord, S. Dieleman, H. Zen et al., "WaveNet: a generative model for raw audio," 2016, https://arxiv.org/abs/1609.03499.
[16] J. Liu, A. Shahroudy, G. Wang, L.-Y. Duan, and A. C. Kot, "SSNet: scale selection network for online 3D action prediction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8349–8358, Salt Lake City, UT, USA, June 2018.
[17] J. Liu, A. Shahroudy, M. L. Perez, G. Wang, L.-Y. Duan, and A. C. Kot, "NTU RGB+D 120: a large-scale benchmark for 3D human activity understanding," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 6, pp. 1453–1467, 2019.
[18] N. Kalchbrenner, A. van den Oord, K. Simonyan et al., "Video pixel networks," in Proceedings of the 34th International Conference on Machine Learning (ICML), vol. 70, pp. 1771–1779, Sydney, Australia, August 2017.
[19] Z. Xu, Y. Wang, M. Long, and J. Wang, "PredCNN: predictive learning with cascade convolutions," in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), pp. 2940–2947, Stockholm, Sweden, July 2018.
[20] X. Liu, J. Yin, H. Liu, and Y. Yin, "PISEP^{2}: pseudo image sequence evolution based 3D pose prediction," 2019, https://arxiv.org/abs/1909.01818.
[21] V. Bloom, D. Makris, and V. Argyriou, "G3D: a gaming action dataset and real time action recognition evaluation framework," in Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, June 2012.
[22] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, "NTU RGB+D: a large scale dataset for 3D human activity analysis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016.
[23] J. Bütepage, M. J. Black, D. Kragic, and H. Kjellström, "Deep representation learning for human motion prediction and classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6158–6166, Honolulu, HI, USA, July 2017.
[24] C. Li, Z. Zhang, W. S. Lee, and G. H. Lee, "Convolutional sequence to sequence model for human dynamics," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5226–5234, Salt Lake City, UT, USA, June 2018.
Copyright
Copyright © 2020 Jin Tang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.