Wednesday, 22 May 2019

Mini Melbourne Step 2a - Getting the World: Failed Attempts

Alright, so in the last post we talked about how we approached this project from the perspective of the 'Dig Experience', with the building of the city as a secondary objective. These two 'projects' were of course in 'co-development', as the dig would be 'set' within the Mini Melbourne world. However, once we had nailed the basics of the dig and determined that it was going to be possible, we looked at what portion of the city we were going to get into Minecraft. We knew we had 3D data of the entire city, but choosing a smaller section and testing out the process for getting that data in was important.

This map shows the initial planned data for import; a small subset of that was given to me for all sorts of testing and investigation. The slab I was testing appears to run from St Paul's Cathedral along Flinders Street (north side) towards the east.


We were not in uncharted territory; many people have their own unique workflows for bringing data, even big data, into Minecraft. So I turned to the community, and two of my favourite online Minecraft experts, Adam Clarke and Adrian Brightmore, showed me how they get 3D data into Minecraft, in multiple ways. Without those two and their expertise, the project would have been a lot more difficult. So, I thanked them back then, and I am going to do it again right now. Now that the project has been released into the wild, I am so pleased with how it turned out, and that we actually managed to get the project over the line.

Without the willingness of both of these amazing guys, I am not sure it would be as big as it currently is, or as easily expandable as I know it to be. With all the sincerity I can muster: thanks, guys. This project started with freely shared community knowledge, and ends with a freely available resource for the entire world to use and explore, create and share, across all major Minecraft platforms.

So after my discussions with Adam and Adrian, I went back to the people who had the data, and we talked about what format they had it in, and what formats they could easily get it into. It took us quite a bit of time to nail down, but in the end we now have a workflow that gets their data into my hands in a form I can easily bring into a Minecraft world, and one that is easily expandable for when we want to include more of the city in future updates.

So, how did we do it? In this post, as the title implies, I will share the explorations I started with and the issues those workflows caused; in the next post we will talk about the workflow that actually worked. All of this data was given to me in .obj format, exported from 3dsMax, a process which is a 'black box' that I know very little about!
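For anyone who has never poked around inside one, a .obj file is just plain text: 'v' lines listing vertex positions and 'f' lines stitching them into faces. A toy example (a single triangle, nothing to do with our actual city data):

# Wavefront .obj is plain text; '#' starts a comment.
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3

That plain-text simplicity is part of why so many tools can at least attempt to read it, even when the models inside are enormous.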

First attempt: Tinkercad. Freely available, and something I have used before, so of course I started here. Interesting how you always start from your area of most comfort! Unfortunately for my comfort levels, I very quickly found that Tinkercad was not all that great at dealing with 'large' or 'detailed' models. It is fantastic for the smaller stuff I have done in the past, but having to neaten up all the holes and everything else in the large, detailed city models I had access to was not a great prospect.


As you can see, there are holes in the terrain, entire missing sections of roof, and all kinds of things I would have had to neaten up in 'post-processing'. Given the scale of the data I was using, that was probably not going to be a great option in terms of ease of use or scaling the project up in future. I also decided at this point that we were going to have to 're-north' Melbourne before we brought it into Minecraft. As you can see, all the buildings are 'angled', and it just looks 'jagged', for lack of a better word.


This is the same Tinkercad import, only 're-northed' by around 21 degrees. It looks way, way nicer, but the missing parts are still very prevalent.
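If you are curious what 're-northing' actually means in practice, it is just rotating every vertex around the vertical axis so the street grid lines up with the world's axes. Here is a minimal Python sketch of the idea; it is not the tool we actually used, it assumes Y is 'up' in the .obj data, it rotates about the origin, and it leaves normals untouched for simplicity:

import math

def renorth_obj(src_path, dst_path, degrees=21.0):
    # Rotate every vertex of an .obj file around the vertical (Y) axis.
    # Toy illustration only: the 21-degree figure is approximate.
    theta = math.radians(degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    with open(src_path) as src, open(dst_path, 'w') as dst:
        for line in src:
            if line.startswith('v '):  # vertex position line: 'v x y z'
                x, y, z = (float(n) for n in line.split()[1:4])
                # rotate (x, z) in the ground plane; height (y) is unchanged
                rx = x * cos_t - z * sin_t
                rz = x * sin_t + z * cos_t
                dst.write(f'v {rx:.6f} {y:.6f} {rz:.6f}\n')
            else:  # faces, normals, comments, etc. pass through unchanged
                dst.write(line)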


Next attempt: VoxtoSchematic. Adrian has a wealth of knowledge about MCEdit and a massive collection of filters for doing all sorts of crazy things with it. He pointed me in this direction: http://www.brightmoore.net/mcedit-filters-1/voxtoschematic, so I began exploring. It required me to import the .obj data into a voxel program; I used MagicaVoxel (https://ephtracy.github.io/) to get started, because... free... and we were only exploring at this stage. This allowed me to 'paint' portions of the data a particular colour and then export it for VoxtoSchematic to work with, and VoxtoSchematic would convert those colours to specific Minecraft blocks.
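To give a feel for what that colour-to-block step involves, here is a tiny Python illustration of the general nearest-colour idea. To be clear: this is not Adrian's filter, and the RGB values and classic block IDs below are examples I have picked for illustration, not its actual tables:

# Illustration only: map a painted voxel colour to the nearest entry in a
# tiny hand-picked palette of (name, classic block ID) pairs.
PALETTE = {
    (125, 125, 125): ('stone', 1),
    (134, 96, 67): ('dirt', 3),
    (114, 120, 54): ('grass', 2),
    (255, 255, 255): ('white wool', 35),
}

def nearest_block(rgb):
    # squared Euclidean distance in RGB space is crude but illustrative
    closest = min(PALETTE, key=lambda c: sum((a - b) ** 2 for a, b in zip(rgb, c)))
    return PALETTE[closest]

print(nearest_block((130, 100, 70)))  # -> ('dirt', 3)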

A truly brilliant piece of software by Adrian, and MagicaVoxel is pretty darn cool as well, but the scale quickly became problematic, as I had no way to control what 'size' the model was converted to schematic at. EDIT: Found my notes (finally), and MagicaVoxel also had a serious limitation on the number of blocks it could deal with: 126 blocks cubed. This was the first point where we seriously looked at how much data is too much. We really needed to find a balance between software limitations and time limitations. That is, we could easily have exported each building individually, but stitching that much data together at the other end would have been way too time-intensive to be sustainable. The great thing about this process is that the models imported quite well and neatly, with far less of the fidelity loss we saw in Tinkercad. However, the scale was all over the place, and I couldn't easily find a way to manage that effectively.
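To put that 126-cube limit in perspective, a quick back-of-the-envelope calculation shows how fast the exports stack up at a 1 block = 1 metre scale. The slab footprint below is a rough guess for illustration, not our actual figures:

import math

TILE = 126  # MagicaVoxel's maximum model edge at the time, in blocks
width_m, depth_m = 1500, 1000  # assumed slab footprint in metres, 1 block = 1 m

tiles = math.ceil(width_m / TILE) * math.ceil(depth_m / TILE)
print(tiles)  # 96 separate exports to paint, convert and stitch by hand

Ninety-odd hand-stitched exports for one slab of the city was never going to be sustainable, which is exactly the software-versus-time trade-off described above.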


It was about this point that I understood enough about MCEdit to realise that, no matter which path I took, it was going to play a big part in the creation of the final world, so we started looking at converting the 3D data directly to schematic format. I investigated FME, and some neat posts detailing how it could translate big data directly into Minecraft, but the path was just not clear enough, and neither I nor the data company had the software, or the time to learn it to the depth it appeared we would need to complete that workflow, so we continued looking.
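For the curious, 'directly to schematic' is less mysterious than it sounds. The classic MCEdit .schematic file is just an NBT compound holding the model's dimensions plus a flat byte array of block IDs in Y, Z, X order. Here is a minimal Python sketch of that core flattening step; it assumes the geometry has already been voxelised into a dict of (x, y, z) -> block ID, and that voxelisation step is the genuinely hard part:

def flatten_to_schematic_order(voxels, width, height, length):
    # The classic MCEdit .schematic stores block IDs as one flat byte array
    # indexed in Y, Z, X order: index = (y * length + z) * width + x.
    blocks = bytearray(width * length * height)  # 0 = air everywhere by default
    for (x, y, z), block_id in voxels.items():
        blocks[(y * length + z) * width + x] = block_id
    return blocks

# Wrap this array, the Width/Height/Length shorts and Materials = 'Alpha'
# in an NBT compound named 'Schematic', gzip it, and MCEdit can import it.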

That, very quickly, summarises a couple of weeks' worth of testing, trialling, researching, testing some more, researching some more, and testing even more. In the next post I will share the workflow that worked for us and our data. As always, thanks for reading, and if you have any comments, please feel free to leave them in the comments section below.
