The Blender simulations, while quite good, didn't let me test the robot's walking gaits on the ground. I believe it's possible, but I couldn't get it to work. Having given up on Blender, I decided to use V-REP, a piece of software I had installed previously but never had the patience to use properly.
I started by looking at the available APIs; since a Python one existed, I decided to give V-REP another try.
First, I needed to install it, which was quite easy:

```shell
wget http://coppeliarobotics.com/V-REP_PRO_EDU_V3_2_0_rev6_64_Linux.tar.gz
tar -zxvf V-REP_PRO_EDU_V3_2_0_rev6_64_Linux.tar.gz
rm V-REP_PRO_EDU_V3_2_0_rev6_64_Linux.tar.gz
```
And to open it:
```shell
cd V-REP_PRO_EDU_V3_2_0_rev6_64_Linux
./vrep.sh
```
This brings up an interface like the one below, where it's possible to load some ready-made robots and models and play around.
The next logical step was getting my robot into the simulator. There are some quite good tutorials on their website here. Following the hexapod tutorial and importing my .stl files allowed me to assemble the robot quite quickly. The physical setup, however, is a little tricky: all the physics processing is usually done on a simplified mesh, and the real-life mesh is parented to this physical mesh. In the end I had this (notice the object hierarchy on the left):
All the objects in the scene have their own "Rigid Body Dynamics" dialog, like this:
In the end, the real meshes are purely aesthetic: they are configured as dynamically static objects, parented to their dynamic counterparts.
The dynamic parts, on the other hand, are not static and are "respondable". It's important to note that collision/intersection between two of the respondable parts in your robot can result in quite weird behaviors (I witnessed something that looked like ghosts flipping my robot upside down), so I suggest clearing the local respondable mask except when you absolutely need a collision between two pieces.
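As far as I understand the dynamics dialog, the respondable masks behave as bitmasks: a pair of respondable shapes only produces a collision response when the relevant masks overlap, with the local half used between parts of the same model and the global half used otherwise. A plain-Python sketch of that logic (an illustration of my understanding, not a V-REP API call; all masks here are made up):

```python
def shapes_respond(a_local, a_global, b_local, b_global, same_model):
    """Sketch of V-REP-style respondable masks (assumption: the local
    masks are compared between parts of the same model, the global
    masks between parts of different models). A pair of shapes only
    produces a collision response when the relevant masks share a bit."""
    if same_model:
        return (a_local & b_local) != 0
    return (a_global & b_global) != 0

# Robot parts with their local masks cleared: no self-collision response,
# so no "ghosts" flipping the robot around...
print(shapes_respond(0b00000000, 0b11111111,
                     0b00000000, 0b11111111, same_model=True))   # False

# ...but each part still responds to the floor, which belongs to a
# different model, so the global masks are used instead.
print(shapes_respond(0b00000000, 0b11111111,
                     0b11111111, 0b11111111, same_model=False))  # True
```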
With the robot physically working, I needed control of the joints. This was achieved by setting each joint to "torque/force mode" and using the remote API function vrep.simxSetJointTargetPosition().
So the final Robot class code was like this:
```python
class VirtualRobotVrep(Robot):
    ....
    def __init__(self):
        vrep.simxFinish(-1)  # just in case, close all opened connections
        self.clientID = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
        vrep.simxStartSimulation(self.clientID, vrep.simx_opmode_oneshot)
        vrep.simxSynchronous(self.clientID, True)

    def finish_iteration(self):
        vrep.simxSynchronousTrigger(self.clientID)
```
In the constructor we close any existing connections, establish a new one, and then set it to synchronous mode, so that every code iteration corresponds to one physics step in the simulator.
The handles are loaded like this:
```python
    def get_joints(self):
        if self.clientID != -1:
            errorCode, handles, intData, floatData, array = vrep.simxGetObjectGroupData(
                self.clientID, vrep.sim_appobj_object_type, 0, vrep.simx_opmode_oneshot_wait)
            data = dict(zip(array, handles))
            fl_leg = dict((key, value) for key, value in data.iteritems() if "fl" in key and "joint" in key)
            fr_leg = dict((key, value) for key, value in data.iteritems() if "fr" in key and "joint" in key)
            rr_leg = dict((key, value) for key, value in data.iteritems() if "rr" in key and "joint" in key)
            rl_leg = dict((key, value) for key, value in data.iteritems() if "rl" in key and "joint" in key)
            return fl_leg, fr_leg, rr_leg, rl_leg
        return None
```
First, all objects are read and their names and handles are converted into a dictionary; then the entries containing both "joint" and the leg position in their names are extracted into four smaller dictionaries, one per leg, which are passed to the legs' constructors.
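To illustrate the filtering step in isolation, here is the same name-matching logic applied to a made-up name-to-handle dictionary (the names and handle values are hypothetical; this sketch uses Python 3's items() where the original code uses Python 2's iteritems()):

```python
# Hypothetical name -> handle mapping, as built from simxGetObjectGroupData
data = {
    "fl_shoulder_joint": 11, "fl_femur_joint": 12, "fl_tibia_joint": 13,
    "fr_shoulder_joint": 21,
    "fl_femur_mesh": 99,  # not a joint, so it must be filtered out
}

def joints_for(prefix, data):
    """Keep only the entries whose name contains both the leg prefix
    (e.g. "fl" for front-left) and the word "joint"."""
    return {name: handle for name, handle in data.items()
            if prefix in name and "joint" in name}

print(joints_for("fl", data))
# {'fl_shoulder_joint': 11, 'fl_femur_joint': 12, 'fl_tibia_joint': 13}
```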
And the leg code is even simpler:
```python
class VirtualLegVrep(Leg):
    ....
    def __init__(self, name, handles, clientID, position, resting_positions):
        Leg.__init__(self, name, position, resting_positions)
        self.handles = handles
        self.clientID = clientID
        for key in self.handles:
            if "shoulder" in key:
                self.shoulderHandle = self.handles[key]
            elif "femur" in key:
                self.femurHandle = self.handles[key]
            elif "tibia" in key:
                self.tibiaHandle = self.handles[key]

    def move_to_angle(self, shoulderAngle, femurAngle, tibiaAngle):
        vrep.simxSetJointTargetPosition(self.clientID, self.shoulderHandle, shoulderAngle, vrep.simx_opmode_oneshot)
        vrep.simxSetJointTargetPosition(self.clientID, self.femurHandle, femurAngle * self.ydirection, vrep.simx_opmode_oneshot)
        vrep.simxSetJointTargetPosition(self.clientID, self.tibiaHandle, tibiaAngle * self.ydirection, vrep.simx_opmode_oneshot)
```
It just takes each joint from the handles and calls the proper simulator instruction to move its simulated counterpart to the desired angle, inverting the angle's orientation when necessary (the joints are oriented differently in real life, Blender, and V-REP).
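The orientation fix boils down to multiplying the femur and tibia angles by a per-leg direction sign, while the shoulder keeps its sign, as in move_to_angle above. A standalone sketch of that mapping (the function name and the angle values are mine, for illustration):

```python
def oriented_targets(shoulder, femur, tibia, ydirection):
    """Return the joint targets actually sent to the simulator:
    the shoulder keeps its sign, while femur and tibia are mirrored
    on legs whose joints are flipped (ydirection == -1)."""
    return (shoulder, femur * ydirection, tibia * ydirection)

# A left leg (ydirection = 1) vs its mirrored right leg (ydirection = -1):
print(oriented_targets(0.1, 0.5, -0.8, 1))   # (0.1, 0.5, -0.8)
print(oriented_targets(0.1, 0.5, -0.8, -1))  # (0.1, -0.5, 0.8)
```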
With everything in place, we can try the code!