I am unfamiliar with a "2D Equilateral" projection and I don't use Blender, so I will guess that you meant "Equirectangular", where the latitude range -90 to +90 degrees maps linearly to the pixel height of your image and the longitude range -180 to +180 degrees maps linearly to the pixel width. (A consequence of this mapping is that features are stretched horizontally by a factor of 1/cos(lat), i.e. progressively more toward the poles.) In this case, the UV (longitude, latitude) outputs have a simple relationship to points on the globe as described above.
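To make the mapping concrete, here is a minimal sketch of that linear pixel mapping. The function name is mine, and it assumes the usual conventions (latitude +90 at the top of the image, longitude -180 at the left edge):

```python
def latlon_to_pixel(lat_deg, lon_deg, width, height):
    # Longitude -180..+180 maps linearly to x = 0..width;
    # latitude +90..-90 maps linearly to y = 0..height (top to bottom).
    x = (lon_deg + 180.0) / 360.0 * width
    y = (90.0 - lat_deg) / 180.0 * height
    return x, y
```

The inverse (pixel to lat/lon) is just as simple, which is what makes equirectangular such a convenient intermediate format.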

Your first image looks like a texture atlas stitched together into a single image, with some external mapping defining the texture-to-sphere coordinates. If you have access to that mapping, you should be able to transform it to points on the sphere, split the sphere into a uniform set of triangles, and project the pixels from those triangles onto an equirectangular bitmap. More triangles give better results, right up to the point where each triangle occupies a single pixel in the output image.
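The core of that projection step is converting a point on the sphere into equirectangular UV coordinates. A minimal sketch, assuming a unit sphere with +z as the north pole (the function name and axis convention are my assumptions, not anything from Blender):

```python
import math

def sphere_point_to_uv(x, y, z):
    # Unit-sphere point -> (u, v) in [0, 1]^2 on an equirectangular image.
    lon = math.atan2(y, x)                    # -pi .. +pi
    lat = math.asin(max(-1.0, min(1.0, z)))   # -pi/2 .. +pi/2, clamped for safety
    u = (lon + math.pi) / (2.0 * math.pi)     # 0 at lon = -180, 1 at lon = +180
    v = (math.pi / 2.0 - lat) / math.pi       # 0 at the north pole, 1 at the south
    return u, v
```

You would run each triangle vertex through something like this, then rasterize the triangle into the output bitmap (taking care with triangles that straddle the lon = ±180 seam).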

Another option would be to take a few images in Blender using an Orthographic projection (a view of the sphere from infinity) along each axis of the sphere and then feed them into a program that will convert those images into an equirectangular bitmap (I'm partial to ReprojectImage from http://fracterra.com/ReprojectImage.zip, mostly because I wrote it and am familiar with it). This sort of thing can also be done with many of the common panorama stitchers.
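For reference, the math behind reprojecting one orthographic view is straightforward. A sketch of the pixel-to-latitude/longitude mapping for a single view, assuming the camera looks down the +z axis and the sphere exactly fills a square image (function name and conventions are mine):

```python
import math

def ortho_pixel_to_latlon(px, py, size):
    # Orthographic view of a unit sphere down the +z axis, sphere filling
    # a size x size image. Returns (lat, lon) in degrees, or None if the
    # pixel lies outside the sphere's disc.
    x = (px + 0.5) / size * 2.0 - 1.0   # -1 .. +1 across the image
    y = 1.0 - (py + 0.5) / size * 2.0   # +1 at top, -1 at bottom
    r2 = x * x + y * y
    if r2 > 1.0:
        return None                     # background, not on the sphere
    z = math.sqrt(1.0 - r2)             # visible hemisphere faces the camera
    lat = math.degrees(math.asin(y))
    lon = math.degrees(math.atan2(x, z))
    return lat, lon
```

Each axis-aligned view covers one hemisphere, which is why you need several views (and some blending near the silhouette, where the orthographic sampling gets very stretched).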

Blender should be able to render cube maps, and cube maps can generally be converted to equirectangular without too much effort. See your local search engine for more information. Your best bet in that case is to put the camera at the center of your sphere, because then the entirety of the generated cube map will be your sphere.
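The usual cube-map-to-equirectangular conversion walks every output pixel, turns it into a 3D view direction, and samples whichever cube face that direction hits. A sketch of the face-selection part (function name and face labels are my own; a real converter would also compute the in-face texture coordinates and sample the pixel):

```python
import math

def cube_face_for_pixel(px, py, width, height):
    # For an equirectangular output pixel, compute the 3D view direction
    # and pick which cube-map face (+x, -x, +y, -y, +z, -z) it falls on.
    lon = (px + 0.5) / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (py + 0.5) / height * math.pi
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    ax, ay, az = abs(x), abs(y), abs(z)
    # The dominant axis of the direction vector determines the face.
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= ax and ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'
```

With the camera at the sphere's center, every one of these directions lands on your sphere's surface, which is why that camera placement gives complete coverage.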