Literature Review

1. Importance

1.1. Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor

1.1.1. The proposed method has enabled a simulated robot to learn to pull and flatten the clothing items mentioned in the paper by learning both pick-up points and pull vectors.

1.2. Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

1.2.1. By using a newly introduced contrastive loss function and an alternative forward function, the authors achieve better performance.

1.3. Learning to Smooth and Fold Real Fabric Using Dense Object Descriptors Trained on Synthetic Color Images

1.3.1. They demonstrated that a newly introduced dense descriptor contributes to manipulation policy-making for a da Vinci surgical robot by learning visual representations of simulated images.

1.4. Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos

1.4.1. They successfully implemented a predictive model that learns label characteristics from known labelled images (which show the states of the robot).

1.5. Grasping Unknown Objects by Coupling Deep Reinforcement Learning, Generative Adversarial Networks, and Visual Servoing

1.5.1. The authors introduced a unique way to couple largely different simulation and real-world images.

1.6. Grasp Prediction and Evaluation of Multi-Fingered Dexterous Hands Using Deep Learning

1.6.1. Their proposed method successfully outperformed GraspIt! in terms of Grasp Quality (GQ).

1.7. Simultaneous Tracking and Elasticity Parameter Estimation of Deformable Objects

1.7.1. The authors proposed a unique way to test a physical property of several objects by matching a simulation of a robot pressing on the objects against the corresponding real environment. The results indicate that their approach is effective and promising.

1.8. Interleaving Planning and Control for Deformable Object Manipulation

1.8.1. They attempted different scenarios, including double slits, a single pillar, and an elastic obstacle. They succeeded in all of them, thereby providing an alternative path toward solving complex route-scheduling problems for robots.

1.9. Fashion Landmark Detection and Category Classification for Robotics

1.9.1. Compared with previous methods, the proposed method shows better results in classifying clothing categories and detecting landmark locations.

1.10. Estimating the Material Properties of Fabric from Video

1.10.1. This paper is the first to predict the stiffness of fabrics with reference to human perception, which paves the way for researchers to study stiffness, 'density', and other physical properties of fabrics.

1.11. Visual Grounding of Learned Physical Models

1.11.1. Remarkably, they use the environment's physical settings and the physics of their objects to predict the objects' future trajectories, with a method combining a Visual Prior, Dynamics-Guided Inference, and a Dynamics Prior.

1.12. LEARNING PARTICLE DYNAMICS FOR MANIPULATING RIGID BODIES, DEFORMABLE OBJECTS, AND FLUIDS

1.12.1. Compared with other dynamics simulators, the authors claim their method is the most effective.

1.13. Four Novel Approaches to Manipulating Fabric using Model-Free and Model-Based Deep Learning in Simulation

1.13.1. Of their four approaches, two are based on real images and applications. These are worth researching and can be applied to my current research.

1.14. Visual Vibrometry: Estimating Material Properties from Small Motion Videos

1.14.1. Instead of investigating the perceptual aspects of fabrics, the authors dug deeply into their physical nature, trying to predict material properties from vibration frequencies.

1.15. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

1.15.1. Their proposed method surpasses currently existing ConvNets and shows higher efficiency, achieving higher accuracy with lower FLOP counts.

1.16. A Constraint-Aware Motion Planning Algorithm for Robotic Folding of Clothes

1.16.1. They defined G-fold and G-drag states to describe the different states and procedures of folding. Their unique and reliable approach guaranteed successful and stable gripping.

1.17. Optimized Object Detection Technique in Video Surveillance System Using Depth Images

1.17.1. The authors found that in low-light environments, captured depth images outperform captured RGB images.

1.18. Gold Volatility Prediction using a CNN-LSTM approach

1.18.1. The authors investigate the possibility of applying a CNN-LSTM architecture in the financial sector.

1.19. Recurrent World Models Facilitate Policy Evolution

1.19.1. The authors applied their method to two scenarios: (a) car racing and (b) a fire-shooting game. They found that the recurrent approach increases the performance of reinforcement learning.

1.20. Relational Reinforcement Learning with Guided Demonstrations

1.20.1. They are the first to use suboptimal demonstrations, together with demonstration on request, to teach robots to pick up and place objects, which is surprising.

1.21. Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

1.21.1. They found that with the assistance of actions their algorithm becomes more effective; their experiments include a T-shirt-folding experiment, which will help my research.

1.22. Benchmarking Bimanual Cloth Manipulation

1.22.1. They provided test criteria for evaluating the manipulation performance of dual-arm robots, and my research can draw inspiration from their studies.

1.23. A Learning Method of Dual-arm Manipulation for Cloth Folding Using Physics Simulator

1.23.1. They paved the way for a simulation approach to studying the physical properties of clothes and robotic manipulation virtually.

1.24. EMD Net: An Encode–Manipulate–Decode Network for Cloth Manipulation

1.24.1. The proposed approach does not require reinforcement learning to learn to pick up clothes; instead, it combines a manipulation schedule with the latent space of a starting state of a piece of clothing to obtain the final decoded state of that piece of clothing.

1.25. Network Dissection: Quantifying Interpretability of Deep Visual Representations

1.25.1. The authors introduced an important concept for judging the performance of a CNN: its interpretability. They tested the influence of bias values and other factors that may affect a CNN's interpretability. I should include these test criteria and related concepts when testing the performance of my network structure.

1.26. Static Stability of Robotic Fabric Strip Folding

1.26.1. They revealed instability problems underlying fabric-folding technology, namely 'snap-through' and the effect of friction. My research can benefit if I can find a critical point and the effect of friction in fabric garment folding.

1.27. SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE

1.27.1. They presented a compressed version of AlexNet with the same performance as AlexNet but a smaller memory footprint. This makes it possible to load the new network onto on-chip FPGAs.

1.28. Position and orientation distribution of wrinkles

1.28.1. Unlike others who focus on visual representations of clothes for classification, the authors proposed a method that examines the wrinkles, fabrics, and overlaps of clothes to describe and classify clothing, obtaining higher classification performance than several previous papers.

1.29. Cloth in the Wind: A Case Study of Physical Measurement through Simulation

1.29.1. The authors presented a project similar to my 'reality-dream closed loop', so this paper can contribute to my research.

1.30. Seeing the Wind: Visual Wind Speed Prediction with a Coupled Convolutional and Recurrent Neural Network

1.30.1. The authors proposed an architecture similar to that of my work, although they predict wind speeds while I predict clothing dynamics and categories.

1.31. Learning-based Cloth Material Recovery from Video

1.31.1. The authors found a unique way to test the bending and stretching properties of clothes, looking deeply into their physical characteristics.

1.32. Estimating Cloth Simulation Parameters from Video

1.32.1. The authors developed a way to investigate the physical properties of clothes at a very early stage, using merely an Intel Pentium 4 CPU (although the process took 50 hours to finish).

1.33. ImageNet Classification with Deep Convolutional Neural Networks

1.33.1. The authors used a newly introduced dropout layer to overcome the overfitting that always hinders deep convolutional neural networks. They tackled the overfitting problem in two ways: 1. data augmentation and 2. the introduction of dropout layers, both of which helped greatly.

1.34. Deep Residual Learning for Image Recognition

1.34.1. Their work is successful on a large dataset of wide variety. On some small datasets, their architecture may face the problem of overfitting.

1.35. Very Deep Convolutional Networks for Large-Scale Image Recognition

1.35.1. The authors successfully increased accuracy to higher values and were among the first to advocate the use of small convolutional filters.

1.36. Learning a visuomotor controller for real world robotic grasping using simulated depth images

1.36.1. The authors used depth images to detect the distance between the robot's grippers and the object, which reflects that they were considering a dynamic way to discover grasp points.

1.37. Learning Depth-Aware Deep Representations for Robotic Perception

1.37.1. The majority of researchers combine RGB and depth images to extract convolutional representations; in this work, the authors instead used depth images separately to learn a dilation factor as a parameter that helps the neural network dig deeper into robotic classification.

1.38. Robot-Aided Cloth Classification Using Depth Information and CNNs

1.38.1. The authors successfully classified clothes by their shapes rather than by their textures. This is pioneering work in my research field.

1.39. Pose and category recognition of highly deformable objects using deep learning

1.39.1. The authors used depth images to detect the shapes of garments and, from those shapes, to estimate the garments' poses, achieving a high classification rate.

1.40. Convolutional-Recursive Deep Learning for 3D Object Classification

1.40.1. The authors jointly exploit RGB and depth images to classify 3D objects, and their performance is the best among the compared methods.

1.41. Inferring the Material Properties of Granular Media for Robotic Tasks

1.41.1. The authors successfully studied the physical properties of granular media from the textures of piles poured on the ground, which is an innovative approach for both academia and industry.

1.42. Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction

1.42.1. The authors successfully introduced and implemented an unsupervised learning method that lets a robot reason about and interact with the outside environment by exploring the physical properties of objects.

1.43. Transient Behavior and Predictability in Manipulating Complex Objects

1.43.1. The authors successfully decreased the time required to reach equilibrium and stability. They found that involving participants in a 'human-robot' mechanism helps with this procedure.

1.44. Assistive Gym: A Physics Simulation Framework for Assistive Robotics

1.44.1. The authors used PyBullet to train policies for an assistive robot to help a human with six different tasks.

2. Definition

2.1. Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

2.1.1. The authors proposed an unprecedented approach that enables a robot to learn to flatten deformable items by using a contrastive loss function and a PlaNet forward function; the trained result is compared with other approaches and outperforms them.

2.2. Learning to Smooth and Fold Real Fabric Using Dense Object Descriptors Trained on Synthetic Color Images

2.2.1. In this paper, the authors proposed a method to learn visual representations of an image from a dense descriptor. They find the corners of a cloth from those visual representations, which can be used in a policy-making process for cloth grasping.

2.3. Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos

2.3.1. In this paper, the authors proposed a method that takes advantage of an LSTM to learn to predict the labels of unlabelled images from previously labelled ones, and they tested their method on a da Vinci surgical robot arm.

2.4. Grasping Unknown Objects by Coupling Deep Reinforcement Learning, Generative Adversarial Networks, and Visual Servoing

2.4.1. The authors proposed an unprecedented approach to transfer knowledge learned in simulation to reality by coupling simulated and real images through CycleGANs and deep reinforcement learning.

2.5. Grasp Prediction and Evaluation of Multi-Fingered Dexterous Hands Using Deep Learning

2.5.1. The authors proposed a method to find optimal grasp points for a Shadow Dexterous Hand using a Grasp Prediction Network (GPN) consisting of convolutional layers and a Gaussian Mixture Model (GMM), and they compared their method with GraspIt! in the Gazebo simulator.
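The GMM head can be pictured with a toy density evaluation. This is not the paper's network, just a minimal pure-Python sketch of how a two-component Gaussian mixture could score candidate grasp positions along one axis; all numbers are illustrative:

```python
import math

def gaussian_pdf(x, mean, std):
    # Density of a 1-D Gaussian at x.
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, means, stds):
    # Gaussian Mixture Model density: weighted sum of component Gaussians.
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, means, stds))

# Two hypothetical grasp-point candidates along one coordinate axis:
# the mixture's modes mark the most promising positions.
weights = [0.6, 0.4]
means = [0.2, 0.8]
stds = [0.05, 0.1]

# The density is highest near the dominant mode at 0.2.
assert mixture_pdf(0.2, weights, means, stds) > mixture_pdf(0.5, weights, means, stds)
```

A network predicting the mixture parameters instead of a single point lets it express several plausible grasps at once rather than averaging them into an invalid one.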

2.6. Simultaneous Tracking and Elasticity Parameter Estimation of Deformable Objects

2.6.1. The authors proposed a method to estimate the elasticity of several objects by comparing a robot pressing on an object in a simulated environment with the same action in a real environment and reducing the differences between them; their method builds on the Finite Element Method.

2.7. Interleaving Planning and Control for Deformable Object Manipulation

2.7.1. The authors proposed an interleaved planning and control method that lets a robot grasp a piece of deformable material while avoiding pre-set obstacles, by predicting deadlock, estimating gross motion, and predicting overstretch.

2.8. Fashion Landmark Detection and Category Classification for Robotics

2.8.1. In this paper, the authors presented a method for garment classification and clothing landmark detection. They designed an architecture that contains a rotation-invariant encoder, a landmark-localizing branch, and an attention branch.

2.9. Estimating the Material Properties of Fabric from Video

2.9.1. In this paper, the authors presented a classification method to predict stiffness and area weight (referred to as the density of fabrics), incorporating human perception of the stiffness of clothes.

2.10. Visual Grounding of Learned Physical Models

2.10.1. The authors proposed an unprecedented method to predict the refined position, environmental physical parameters, rigidity, and future trajectories of a rigid object using a multilayer perceptron (MLP) neural network. They compared their results with DensePhysNet, which performed worse than their method. They call their method the 'Visually Grounded Physics Learner' (VGPL).

2.11. LEARNING PARTICLE DYNAMICS FOR MANIPULATING RIGID BODIES, DEFORMABLE OBJECTS, AND FLUIDS

2.11.1. The authors introduced a method to represent dynamic objects as particles by finding proper vertices and the root of each cluster, and by predicting their future shapes.

2.12. Four Novel Approaches to Manipulating Fabric using Model-Free and Model-Based Deep Learning in Simulation

2.12.1. The UC Berkeley research group BAIR presented four unique ways of flattening clothes, two of which are model-based methods while the other two are model-free. Their goal is to learn to flatten clothes from low coverage to high coverage.

2.13. Visual Vibrometry: Estimating Material Properties from Small Motion Videos

2.13.1. The authors proposed a method that uses the frequency variations of different materials to predict material properties from small-motion videos.
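The core idea, recovering a dominant vibration frequency from an extracted motion signal, can be sketched with a naive DFT. This is not the authors' pipeline; the signal and sample rate below are synthetic stand-ins for tracked motion of a vibrating rod or fabric patch:

```python
import cmath, math

def dominant_frequency(signal, sample_rate):
    # Naive DFT; return the frequency of the bin with the largest
    # magnitude (ignoring the DC term and the mirrored upper half).
    n = len(signal)
    mags = []
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        mags.append((abs(coeff), k))
    _, k_best = max(mags)
    return k_best * sample_rate / n

# A synthetic 5 Hz oscillation sampled at 100 Hz.
rate = 100
signal = [math.sin(2 * math.pi * 5 * t / rate) for t in range(rate)]
assert dominant_frequency(signal, rate) == 5.0
```

Peaks like this one correspond to resonant modes, which in turn depend on material properties such as stiffness and density.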

2.14. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

2.14.1. In this paper, the authors presented a highly effective ConvNet architecture that compoundly scales up width, depth, and image resolution (image size) to obtain higher classification accuracy with fewer FLOPs, compared with ResNet-50 to ResNet-152, DenseNet-169 and -264, and several other ConvNet architectures.
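The compound-scaling rule can be illustrated with the coefficients reported in the EfficientNet paper (alpha = 1.2, beta = 1.1, gamma = 1.15, constrained so that FLOPs roughly double per unit of the compound coefficient phi); the base network dimensions below are made up for illustration:

```python
# Compound scaling: depth, width and input resolution grow together as
# powers of a single coefficient phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # values reported in the paper

def scale(phi, base_depth, base_width, base_resolution):
    # Scaled (depth, width, resolution) for a given compound coefficient phi.
    return (round(base_depth * alpha ** phi),
            round(base_width * beta ** phi),
            round(base_resolution * gamma ** phi))

# The constraint that makes FLOP cost grow roughly like 2**phi:
assert abs(alpha * beta ** 2 * gamma ** 2 - 2) < 0.1

# Hypothetical base network (18 layers, 64 channels, 224x224 input):
print(scale(0, 18, 64, 224))  # → (18, 64, 224)
print(scale(2, 18, 64, 224))  # → (26, 77, 296)
```

Scaling all three dimensions jointly, rather than only depth or only width, is the paper's central claim for why the family is efficient.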

2.15. A Constraint-Aware Motion Planning Algorithm for Robotic Folding of Clothes

2.15.1. The authors designed a sequence of motion primitives to fold different clothes. They tested their method in both a simulated environment and a real environment.

2.16. Optimized Object Detection Technique in Video Surveillance System Using Depth Images

2.16.1. The authors compared the use of RGB, infrared, and depth images for video surveillance, where low light intensity is a problem.

2.17. Gold Volatility Prediction using a CNN-LSTM approach

2.17.1. To predict gold volatility in the financial market, the authors proposed a CNN-LSTM architecture that takes images as input and outputs gold volatility. They used a combined GARCH, SVR, LSTM, and CNN architecture to predict the volatility of gold.

2.18. Recurrent World Models Facilitate Policy Evolution

2.18.1. In this paper, the authors presented a V-M-C model (Variational Autoencoder, LSTM-based memory model, and Controller) that learns a policy by reinforcement learning.

2.19. Relational Reinforcement Learning with Guided Demonstrations

2.19.1. In this paper, the authors presented a suboptimal teacher-demonstration method to teach robots to complete several tasks.

2.20. Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

2.20.1. The proposed method combines an embedded image latent space with an action latent space to find a path for a robot to complete a series of actions; the architecture is split into a visual planning part and an action planning part. A VFW-APN structure completes the desired actions.

2.21. Benchmarking Bimanual Cloth Manipulation

2.21.1. The authors defined several states of clothing and built a benchmark for folding a towel, folding a T-shirt, and spreading a tablecloth, measuring the errors. The cloth states include 'grasped at one point', 'grasped at two points', 'already folded', and 'crumpled', and the manipulation stage is divided into 'grasp one point', 'grasp two points', and 'manipulation'. They compared the performance when starting from different states.

2.22. A Learning Method of Dual-arm Manipulation for Cloth Folding Using Physics Simulator

2.22.1. There are three parts to the experiment: the first is to find a proper representation of actual clothes in a physics simulator called Blender; the second is to find an optimized manipulation trajectory and the respective grasping points in the simulated environment; and the third is to apply the gripping points and manipulation trajectories on a real robot to verify the validity of the method.

2.23. EMD Net: An Encode–Manipulate–Decode Network for Cloth Manipulation

2.23.1. The authors presented a novel Encode-Manipulate-Decode approach to tackle the cloth-folding problem. They use a start state and a goal state to generate a manipulation schedule.

2.24. Network Dissection: Quantifying Interpretability of Deep Visual Representations

2.24.1. The authors presented evaluation criteria for studying the interpretability of several CNNs (GoogLeNet, VGG-16, AlexNet, and ResNet), introducing a 'detector' to describe interpretability performance. They also compared the importance of interpretability with the other aspects of their testing criteria for judging the performance of a CNN.

2.25. Static Stability of Robotic Fabric Strip Folding

2.25.1. The authors studied the folding instability of a fabric strip, finding that an R-path folding is more effective in reality. To study instability, they located a critical point where the fabric strip is 'snapped' by a robot gripper (in a process called 'continuation'), and this critical point helps find the best folding path for fabric strip folding.

2.26. SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE

2.26.1. To reduce the memory requirements of AlexNet and compress it into a more flexible structure, the authors proposed the compression-based SqueezeNet, which performs as well as AlexNet but requires less memory. They introduced a 'fire module' to achieve this goal and a 'bypass connection' to improve performance.
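The parameter saving of a fire module is simple arithmetic. Below is a sketch with hypothetical channel counts (not the exact SqueezeNet configuration) comparing it against a plain 3x3 convolution producing the same number of output channels:

```python
def conv_params(in_ch, out_ch, k):
    # Weights in a k x k convolution; biases ignored for simplicity.
    return in_ch * out_ch * k * k

def fire_module_params(in_ch, squeeze, expand1x1, expand3x3):
    # A SqueezeNet 'fire module': a 1x1 squeeze layer followed by
    # parallel 1x1 and 3x3 expand layers whose outputs are concatenated.
    return (conv_params(in_ch, squeeze, 1)
            + conv_params(squeeze, expand1x1, 1)
            + conv_params(squeeze, expand3x3, 3))

# Hypothetical shapes: replacing a plain 3x3 conv (96 -> 128 channels)
# with a fire module producing the same 128 output channels (64 + 64).
plain = conv_params(96, 128, 3)            # 110592 weights
fire = fire_module_params(96, 16, 64, 64)  # 1536 + 1024 + 9216 = 11776
assert fire < plain / 9
```

The squeeze layer keeps the input to the expensive 3x3 filters narrow, which is where most of the saving comes from.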

2.27. Clothing Classification Using Image Features Derived from Clothing Fabrics, Wrinkles and Cloth Overlaps

2.27.1. The authors described a unique method to classify clothes based on their wrinkles, cloth fabrics, and cloth overlaps. They introduced two images, a maximum orientation image and a maximum magnitude image, to describe features. They devised four methods to analyze the feature descriptor: Position and Orientation Distribution of Wrinkles (DST-W), Clothing Fabrics and Wrinkle Density (CF-WD), Existence of Clothing Overlaps (OVLP), and Scale Space Extrema (SSEX).

2.28. Cloth in the Wind: A Case Study of Physical Measurement through Simulation

2.28.1. The authors proposed inferring fabric parameters by combining simulation and reality (i.e. a simulation-reality architecture). They use a Spectral Decomposition Network (SDN) to transform the images into frequency-domain maps via a Discrete Fourier Transform and feed the maps into a ResNet block. They compare the latent properties of simulated and real images and make the simulated images similar to the real ones.
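The frequency-domain maps can be illustrated with a naive 2-D DFT. This is only a stdlib sketch of the transform step on a tiny synthetic pattern, not the SDN itself:

```python
import cmath, math

def dft2_magnitude(image):
    # Naive 2-D discrete Fourier transform magnitude map of a grayscale
    # image, the kind of frequency-domain input a spectral network consumes.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            coeff = sum(image[y][x] * cmath.exp(-2j * math.pi * (u * y / h + v * x / w))
                        for y in range(h) for x in range(w))
            out[u][v] = abs(coeff)
    return out

# A tiny vertical-stripe pattern: spectral energy concentrates at the
# stripes' horizontal frequency instead of spreading over the whole map.
img = [[1.0 if x % 2 == 0 else 0.0 for x in range(8)] for _ in range(8)]
mag = dft2_magnitude(img)
assert mag[0][4] > mag[0][1]  # stripe frequency dominates other non-DC bins
```

Periodic cloth motion shows up as such concentrated peaks, which is why a frequency-domain representation is a natural input for matching simulation to reality.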

2.29. Seeing the Wind: Visual Wind Speed Prediction with a Coupled Convolutional and Recurrent Neural Network

2.29.1. The authors presented a ResNet18-LSTM architecture to predict wind speed. They compared their results with real data, taking into account disturbances in the environment.

2.30. Learning-based Cloth Material Recovery from Video

2.30.1. The authors presented a method to classify the stretch and bend ratios of clothes using a CNN-LSTM structure. They tested their performance against several baselines and found a unique way to generate a synthetic dataset from a small number of videos.

2.31. Estimating Cloth Simulation Parameters from Video

2.31.1. The authors presented an approach to couple the parameters of simulated images with those of real images by comparing their angular maps and silhouette mismatch to optimize the configuration of the simulated images.

2.32. ImageNet Classification with Deep Convolutional Neural Networks

2.32.1. The authors presented the very famous deep convolutional neural network called 'AlexNet', which consists of five convolutional layers and three fully connected layers combined with two corresponding dropout layers. They successfully overcame the problem of overfitting through data augmentation and the introduction of dropout layers. Their result showed the best performance among all convolutional networks at the time.
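The dropout idea can be sketched in a few lines using the now-standard 'inverted' formulation (a generic illustration, not AlexNet's original code):

```python
import random

def dropout(activations, p, training=True):
    # Inverted dropout: during training each unit is zeroed with
    # probability p and survivors are scaled by 1/(1-p), so the expected
    # activation is unchanged and test time needs no rescaling.
    if not training:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
acts = [1.0] * 10000
dropped = dropout(acts, p=0.5)
# Roughly half the units are zeroed; the mean stays near 1.0 in expectation.
mean = sum(dropped) / len(dropped)
assert 0.9 < mean < 1.1
assert dropout(acts, p=0.5, training=False) == acts
```

Because each forward pass samples a different sub-network, units cannot co-adapt to specific partners, which is the regularizing effect the paper exploits.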

2.33. Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor

2.33.1. Five policies were used to smooth wrinkled clothing items, and one was selected as the policy to pull the clothing items; a simulated da Vinci research robot learned pick-up points and pull vectors through deep imitation learning.

2.34. Deep Residual Learning for Image Recognition

2.34.1. The authors presented the Deep Residual Network, which adds an identity shortcut between deep convolutional layers. They also extended their network with a bottleneck block, from 34 layers to 101 and 152 layers. Their architecture overcomes the degradation problem that appears as networks grow very deep.
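The identity shortcut is easy to sketch: the block computes F(x) + x, so even a residual branch that has learned almost nothing leaves the block close to an identity mapping (a toy element-wise example, not the convolutional block itself):

```python
def residual_block(x, transform):
    # Identity shortcut: the block outputs F(x) + x element-wise.
    return [f + xi for f, xi in zip(transform(x), x)]

def near_zero_transform(x):
    # Stand-in for a residual branch whose weights contribute almost nothing.
    return [1e-6 * xi for xi in x]

x = [1.0, -2.0, 3.0]
y = residual_block(x, near_zero_transform)
# The block reduces to (nearly) the identity, so stacking many such blocks
# cannot make the network worse than its shallower counterpart.
assert all(abs(yi - xi) < 1e-5 for yi, xi in zip(y, x))
```

This is the intuition behind why very deep residual stacks remain trainable: each block only needs to learn a residual correction on top of the identity.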

2.35. Very Deep Convolutional Networks for Large-Scale Image Recognition

2.35.1. The authors proposed a feature extraction method in which they deepened the convolutional layers and increased accuracy by using small (3×3) convolutional filters, compared with previous work that used large kernel sizes and strides. The output feature dimension is 512, larger than that of AlexNet. They call their network 'VGG-16'.
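The benefit of stacking small filters is pure arithmetic: three stride-1 3x3 layers cover the same 7x7 receptive field as a single 7x7 layer, with roughly half the weights and two extra nonlinearities in between. A sketch (channel count is illustrative):

```python
def receptive_field(num_layers, kernel):
    # Receptive field of a stack of stride-1 convolutions:
    # each extra layer adds (kernel - 1) pixels.
    return 1 + num_layers * (kernel - 1)

def stack_params(num_layers, kernel, channels):
    # Weights in a stack of convolutions keeping `channels` channels throughout.
    return num_layers * kernel * kernel * channels * channels

c = 64
# Three stacked 3x3 convolutions see the same 7x7 region as one 7x7 layer...
assert receptive_field(3, 3) == receptive_field(1, 7) == 7
# ...with far fewer parameters: 27*c^2 versus 49*c^2.
assert stack_params(3, 3, c) < stack_params(1, 7, c)
```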

2.36. Learning a visuomotor controller for real world robotic grasping using simulated depth images

2.36.1. The authors opted for depth images to help a UR5 robot grasp objects. They used the depth images to detect the distance between the gripper and the object, and then attempted to find the closest point to grasp from the depth images. They found that their controller was robust to several disturbances during grasping when the objects were shifted on a piece of paper.

2.37. Learning Depth-Aware Deep Representations for Robotic Perception

2.37.1. The authors extracted features from depth images to learn a dilation factor that expands the kernel size of the filters, boosting regression and classification tasks in pixel-level robotic perception.

2.38. Robot-Aided Cloth Classification Using Depth Information and CNNs

2.38.1. The authors presented a way to use depth images for shape classification: a robot grasps clothes of different shapes and rotates them in front of a depth camera to provide multiple perspectives of the clothes, which are fed to a convolutional neural network to boost shape classification.

2.39. Pose and category recognition of highly deformable objects using deep learning

2.39.1. The authors used a hierarchical deep convolutional neural network with depth images of garments to predict the garments' shapes and estimate their poses. First they classified the shape of the garments, and then they estimated the garments' poses based on the shape classification result.

2.40. Convolutional-Recursive Deep Learning for 3D Object Classification

2.40.1. The authors presented an approach that uses an RNN-CNN architecture to classify 3D objects from RGB-D images, with a tree-structured recursive neural network with randomly assigned weights. Compared with the state of the art, their method outperforms the other approaches.

2.41. Inferring the Material Properties of Granular Media for Robotic Tasks

2.41.1. The authors estimated the friction coefficients and coefficients of restitution of granular media by pouring granular particles on the ground in both a simulated and a real environment, using likelihood-free Bayesian inference on the media's depth images. They took advantage of the textures of poured cereal and barley on the ground and used their algorithm to analyze those textures. They finally verified the approach by using an ABB YuMi robot to pour the particles in a real environment and estimate their physical properties.
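Likelihood-free inference of this kind can be sketched as approximate Bayesian computation (ABC) rejection sampling. The 'simulator' below is a made-up monotone mapping from a friction coefficient to a pile's angle of repose, not the authors' physics engine; the numbers are purely illustrative:

```python
import random

def simulate_pile_angle(friction):
    # Hypothetical stand-in simulator: maps a friction coefficient to the
    # angle of repose of a poured pile, plus a little simulation noise.
    return 20.0 + 30.0 * friction + random.gauss(0, 0.5)

def abc_rejection(observed_angle, n_samples=20000, tolerance=1.0):
    # ABC rejection sampling: draw friction from a uniform prior and keep
    # draws whose simulated summary statistic lands within `tolerance`
    # of the observation. No likelihood is ever evaluated.
    accepted = []
    for _ in range(n_samples):
        friction = random.uniform(0.0, 1.0)
        if abs(simulate_pile_angle(friction) - observed_angle) < tolerance:
            accepted.append(friction)
    return sum(accepted) / len(accepted)

random.seed(1)
# If the observed pile shows a 35-degree angle, the posterior mean should
# sit near the friction value (0.5) that the toy simulator maps to 35.
estimate = abc_rejection(observed_angle=35.0)
assert 0.45 < estimate < 0.55
```

The same pattern scales up by replacing the scalar summary statistic with richer observations such as depth-image features of the poured pile.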

2.42. Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction

2.42.1. The authors proposed a neural network called 'Hind4sight Net', which consists of two models: a forward model and an inverse model. In the forward model, they predict the future 3D flow vector combined with a 2D image, given prior knowledge of the 'poke' action. In the inverse model, they predict the 'poke' action from the current and next (t+1) 3D point-cloud depth images of the object. They conducted their experiments in both a simulated and a real environment.

2.43. Transient Behavior and Predictability in Manipulating Complex Objects

2.43.1. The authors presented a method to configure the initial locations of a ball in a cup so as to minimize the time needed to make it stable, and they involved real participants to verify their ideas.

2.44. Assistive Gym: A Physics Simulation Framework for Assistive Robotics

2.44.1. The authors introduced a reinforcement learning method for a simulated robot to assist a simulated human in accomplishing six tasks in a PyBullet environment, learning the relevant policies via reinforcement learning. They call their approach 'Assistive Gym'.

3. Tools

3.1. Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor

3.1.1. Five state-of-the-art policies, deep imitation learning (a reinforcement-learning-related method), some clothing items, and a da Vinci robot.

3.2. Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

3.2.1. A contrastive loss function, the PlaNet forward function, a simulation environment, and a PR2 robot.

3.3. Learning to Smooth and Fold Real Fabric Using Dense Object Descriptors Trained on Synthetic Color Images

3.3.1. Blender 2.8 (for generating simulated images), a da Vinci surgical robot (for verifying the validity of the proposed dense descriptor), and an ABB YuMi robot, also for verifying the validity of the proposed method.

3.4. Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos

3.4.1. An LSTM model, a deep embedding neural network, a da Vinci surgical robot, and images captured of the robot's arms in different states.

3.5. Grasping Unknown Objects by Coupling Deep Reinforcement Learning, Generative Adversarial Networks, and Visual Servoing

3.5.1. Deep reinforcement learning, CycleGANs, and visual servoing.

3.6. Grasp Prediction and Evaluation of Multi-Fingered Dexterous Hands Using Deep Learning

3.6.1. Convolutional Layers, a Gaussian Mixture Model and Fully-Connected Linear Layers

3.7. Simultaneous Tracking and Elasticity Parameter Estimation of Deformable Objects

3.7.1. A robot, a foam block, a complex-shaped plush toy, a soft ball, a finite element method, and a robot environment.

3.8. Interleaving Planning and Control for Deformable Object Manipulation

3.8.1. Some deformable objects, and a method called REF.

3.9. Fashion Landmark Detection and Category Classification for Robotics

3.9.1. Data augmentation, a rotation invariance encoder, a landmark branch, an attention branch, and some images from Google.

3.10. Estimating the Material Properties of Fabric from Video

3.10.1. Some kinds of Fabrics, a classification method and human questionnaires.

3.11. Visual Grounding of Learned Physical Models

3.11.1. A NVIDIA Flex Simulator, and a Visually Grounded Physics Learner (VGPL).

3.12. LEARNING PARTICLE DYNAMICS FOR MANIPULATING RIGID BODIES, DEFORMABLE OBJECTS, AND FLUIDS

3.12.1. Fluid, Elastic Objects and Rigid Objects

3.13. Four Novel Approaches to Manipulating Fabric using Model-Free and Model-Based Deep Learning in Simulation

3.13.1. Simulated fabrics, real fabrics, and the da Vinci surgical robot.

3.14. Visual Vibrometry: Estimating Material Properties from Small Motion Videos

3.14.1. Clamped Rods, Fabrics and a Speaker

3.15. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

3.15.1. ConvNets, 3.5 billion training images from Instagram, and large-scale computers.

3.16. A Constraint-Aware Motion Planning Algorithm for Robotic Folding of Clothes

3.16.1. Some clothes, a Simulation Software and a Real PR2 robot.

3.17. Optimized Object Detection Technique in Video Surveillance System Using Depth Images

3.17.1. RGB images, depth images, infrared images, and the 'You Only Look Once' (YOLO) detector.

3.18. Gold Volatility Prediction using a CNN-LSTM approach

3.18.1. Gramian Angular Field, Markov Transition Field, and a CNN-LSTM architecture.

3.19. Recurrent World Models Facilitate Policy Evolution

3.19.1. A Variational Autoencoder, a Long Short-Term Memory network, and a controller.

3.20. Relational Reinforcement Learning with Guided Demonstrations

3.20.1. Learning policies, suboptimal learning policies, and demonstration policies.

3.21. Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

3.21.1. An APN-VFW structure, a Robot, a stacking box and a T-shirt

3.22. Benchmarking Bimanual Cloth Manipulation

3.22.1. A towel, a tablecloth, a T-shirt, and robots.

3.23. A Learning Method of Dual-arm Manipulation for Cloth Folding Using Physics Simulator

3.23.1. A towel, a T-shirt, trousers, Blender, and a robot.

3.24. EMD Net: An Encode–Manipulate–Decode Network for Cloth Manipulation

3.24.1. Clothes, Robot and Blender 3D Editor.

3.25. Network Dissection: Quantifying Interpretability of Deep Visual Representations

3.25.1. VGG 16, ResNet, AlexNet and GoogLeNet.

3.26. Static Stability of Robotic Fabric Strip Folding

3.26.1. Clothes, Simulation Environment and Robot.

3.27. SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE

3.27.1. ImageNet.

3.28. Clothing Classification Using Image Features Derived from Clothing Fabrics, Wrinkles and Cloth Overlaps

3.28.1. Clothes.

3.29. Cloth in the Wind: A Case Study of Physical Measurement through Simulation

3.29.1. Flags, cameras, Blender, and ArcSim.

3.30. Seeing the Wind: Visual Wind Speed Prediction with a Coupled Convolutional and Recurrent Neural Network

3.30.1. Trees from Lancaster, California, Anemometers and Flags.

3.31. Learning-based Cloth Material Recovery from Video

3.31.1. ArcSim, Blender, and some background photos.

3.32. Estimating Cloth Simulation Parameters from Video

3.32.1. Real Clothes and Simulated Clothes.

3.33. ImageNet Classification with Deep Convolutional Neural Networks

3.33.1. ImageNet LSVRC-2010.

3.34. Deep Residual Learning for Image Recognition

3.34.1. CIFAR-10 and the COCO object detection dataset.

3.35. Very Deep Convolutional Networks for Large-Scale Image Recognition

3.35.1. ILSVRC-2012 Dataset

3.36. Learning a visuomotor controller for real world robotic grasping using simulated depth images

3.36.1. Several objects, a UR5 robot, and a piece of paper.

3.37. Learning Depth-Aware Deep Representations for Robotic Perception

3.37.1. NYU Database.

3.38. Robot-Aided Cloth Classification Using Depth Information and CNNs

3.38.1. Trousers, shirt, towel, polo.

3.39. Pose and category recognition of highly deformable objects using deep learning

3.39.1. Depth images of garments.

3.40. Convolutional-Recursive Deep Learning for 3D Object Classification

3.40.1. 3D object images.

3.41. Inferring the Material Properties of Granular Media for Robotic Tasks

3.41.1. Cereal, barley, pills, and so on.

3.42. Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction

3.42.1. Real Model Cube, Simulated Model Cube, and a KUKA Manipulator Robot.

3.43. Transient Behavior and Predictability in Manipulating Complex Objects

3.43.1. Cups, balls, and participants.

3.44. Assistive Gym: A Physics Simulation Framework for Assistive Robotics

3.44.1. PyBullet and Gym.