Monday, 29 July 2013

Darcy's mysteries

After I reconstructed the face of a skeleton called Joaquim, kept at the Museum of Medical History (MUHM), I was invited to do another job, for a skull that belonged to the same donor.

When I traveled to Porto Alegre to speak at a conference (FISL 14), I took the opportunity to meet Joaquim, give a TV interview, and see the other skull.

At first glance I didn't see anything unusual in the structure of the skull. I carried it to a room, took some photos, and made a 3D scan.

I took pictures from the top and the bottom to make a complete 3D scan with PPT-GUI.

When I saw the skull, I imagined that it belonged to a woman. To be safer, I sent the 3D mesh to Dr. Paulo Miamoto, a forensic specialist, to write a report on the sex of the individual.

To my surprise, the report was inconclusive. The protocol uses a scale from 1 to 5, where 1 is markedly female and 5 is markedly male; the result was 2.4!

We asked other specialists for their opinion: half said it was a woman, half said it was a man.

Because of this ambiguity, we started to call the skull by a gender-neutral Portuguese name: Darcy.

This was one of the mysteries; the other appeared during the 3D modeling.

The video above shows the reconstruction process. Apparently there is nothing unusual about the shape of the face.

When I placed the skin, I noticed that I had to decrease the volume a lot at the top of the head.

When we see the two meshes side by side, Joaquim (a small man, at left) and Darcy (at right), we can see a striking difference at the top of the head.

The skull was submitted to a neurologist to be analyzed.

I don't have the knowledge even to speculate about the result. We have to wait.

I hope you enjoyed it.

A big hug and see you in the next one!

Saturday, 27 July 2013

Happy birthday ATOR! Two years of Open Research

The 27th of July is the "birthday" of ATOR and, as last year, on this day I would like to share some statistics about the progress of this experiment.
In one year, the number of active authors has increased from 6 to 13, while the number of posts reached 160 (79 last year). The reactions of the community led to 271 comments (96 of which were written in 2012). Currently (at 19:16) the number of page views is 109,447 (48,899 visits since the activation of the Revolver Maps plugin) and we have 37 new members who, added to the 25 members of 2012, bring the total to 61 people.
As you can see in the image below, the main cause for celebration in 2013 is reaching 100,000 visits.

This short post is intended as a thank-you to all the people who make up the community of ATOR, readers and authors alike.

Thank you for your posts, feedbacks and support! 

Your help was very important in improving and speeding up the research presented in this blog. Thanks to you we reached results which were not initially foreseen, and in some cases ATOR gave birth to new methodologies that have rapidly become very popular in the scientific community.
We hope to keep this trend going next year and to maintain a high level of quality in the field of Open Research!

Thursday, 25 July 2013

WW1 - Documentation Project: New Data Acquisition Season

Finally, after a rainy spring, we are going to start a new campaign of data acquisition above 2150 m a.s.l. along the WW1 front line. This time we will document a section of GUA10B (Grenz-Unter-Abschnitt) KAIII (Kampf-Abschnitt) named Hahnspiel, a second line of Austrian fortifications along the Dolomites front line between May 1915 and November 1917.

The main innovation of this year will be the use of our aerial drone (Naza DJI), giving us an additional point of view in this mountainous and uneven terrain.

In addition to our traditional approach (GPS survey, terrestrial structure from motion, geolocalized images and archaeological description) we want to integrate data from aerial surveys in order to create models of larger areas.
We hope to get through the summer without any crash :-) so that we can share our experience with you next autumn.

Wednesday, 17 July 2013

Forensic facial reconstruction of a living individual using open-source software (blind test)

Studying alone is often a good solution when one cannot find support or understanding for something new and exciting, albeit not appealing to the general public.

Still, when it comes to evolving and adapting scientific knowledge for the benefit of human beings, there is nothing better than having people around with the same goals, motivated to work towards a better world, one more accessible to those who have an interest in a certain area of knowledge.

In early 2012 I began my studies in the field of forensic facial reconstruction. Now, a year and a half later, over forty reconstructions have been completed, mostly of modern humans, some hominids and even a saber-toothed tiger.

Over that time, in the lectures I taught, in the e-mails I received and in the courses I offered, people often asked me about the precision of the method and whether I had tested it on skulls of known people (living or not).

Graph representing the precision of a reconstruction (in millimeters) in relation to the skin of the volunteer, obtained by optical scanning. The blue areas represent areas where the face was reconstructed deeper than the real face, while the yellow areas represent regions in which the real face was deeper than the reconstructed mesh.

I had already done some experiments, but for technical reasons and in order to not disclose the identity of volunteers, I did not publish them. Instead, I was limited to showing the work of great artists such as Gerasimov from Russia, Caroline Wilkinson from England and Karen T. Taylor from USA.

Fortunately, a few days ago, research partner Dr. Paulo Miamoto sent me a scanned skull at my request, so I could test a newly developed technique to "wear" the skin over the virtual muscles. This skull, sent without much background on it, but with permission for reconstruction by its "owner", would be the first opportunity I had to show a case of facial reconstruction of a living person, exposing the degree of accuracy that such works may reach.

Development of the Work

A few days ago, I began to test a series of Blender modifiers, seeking an option that would allow me to "wear" the skin over a reconstruction in muscle stage. The goal was to make the process faster, and therefore more accessible to those who wish to replicate it, whether one is gifted with artistic skills or not.

I managed to find a solution with a modifier called Shrinkwrap (and a number of adaptations), as seen in the video above. The skull shown on the video is from another reconstruction in progress. It may seem almost imperceptible to a layman in forensic facial reconstruction, but it is a "blessing" for those who are just starting to work on virtual sculpture.
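For those curious about what the modifier actually does, here is a deliberately simplified sketch of the idea behind Shrinkwrap: every vertex of the "skin" mesh is moved onto the closest point of a target surface (reduced here to the closest vertex, working on hypothetical toy coordinates rather than Blender data):

```python
# Simplified sketch of a Shrinkwrap-style projection: snap each vertex
# of a skin mesh onto the nearest vertex of a target mesh. The real
# Blender modifier projects onto the nearest SURFACE point and offers
# several wrap methods; this toy version only illustrates the concept.

def nearest_vertex(p, target):
    """Return the target vertex closest to point p (Euclidean distance)."""
    return min(target, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def shrinkwrap(skin, target):
    """Move every skin vertex onto its nearest target vertex."""
    return [nearest_vertex(p, target) for p in skin]

# hypothetical skin vertices floating above a hypothetical target surface
skin = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)]
target = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.2), (5.0, 5.0, 5.0)]
print(shrinkwrap(skin, target))
```

Blender's actual modifier also supports an offset between skin and target, which is part of the "number of adaptations" mentioned above and is not reproduced here.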

Going back to the skull provided by Dr. Paulo Miamoto: it offered me the possibility of reconstructing a living person who was known only to him. He asked me for help with the configuration of the skull, since he would have to "assemble" the structure, because the CT was acquired with a cone beam tomograph.

Usually a cone beam CT captures only a portion of the skull, due to the reduced field of view of the hardware. It is equipment widely used for dental purposes, and it is usually cheaper than a medical CT scanner.

An interesting fact in this story is that the whole process was done with open-source software. Initially, Dr. Miamoto opened the scans in InVesalius and filtered the part that corresponded to the bones. For this step he used a tutorial that I wrote, explaining the basic operation of InVesalius (translated from Portuguese):

Then he imported the three parts into MeshLab and aligned them in 3D space so that they recomposed the structure of the skull. All steps of this process were done thanks to the tutorials available on Mister P's YouTube channel:

After aligning the meshes, the skull was exported as a .ply file and sent with the following anthropological data for the orientation of the reconstruction:

- Gender: Male;

- Ancestry: mixed xanthoderm (of Japanese descent) and Caucasian (white);

- Age: 20-30 years.

Upon receiving the skull I had to simplify the mesh, because the reconstructed CT had generated some areas with significant noise, inherent to the image capture technique of cone beam CT scanners. Then I rebuilt the missing area of the skull by aligning it with another skull from my database, as recommended by authorities in the field. Thus the work could be done more easily, with more spatial references.

With the skull cleaned and properly positioned in the Frankfurt plane, the virtual pegs used as references for soft tissue depth were placed, and sketches of the projections of the nose and face profile were made. As Asian and Native American individuals share physical anthropological traits that make their skulls similar, a soft tissue depth table for the native Indians of southwestern South America (Rhine, 1983) was used.
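The geometry behind the pegs is straightforward: each peg starts at a craniometric landmark and extends outward from the bone surface by the tabulated tissue depth. The sketch below shows one way to compute where a peg ends; the landmark, normal and depth value are hypothetical illustrative numbers, not entries from the Rhine table:

```python
# Sketch: the skin-surface point implied by one soft-tissue depth
# measurement, obtained by offsetting a bone landmark along the
# outward surface normal by the tabulated depth (all values in mm).

def peg_endpoint(landmark, normal, depth_mm):
    """Offset 'landmark' along 'normal' by 'depth_mm'."""
    # normalize the normal so depth_mm is a true distance
    length = sum(c * c for c in normal) ** 0.5
    unit = tuple(c / length for c in normal)
    return tuple(l + u * depth_mm for l, u in zip(landmark, unit))

# hypothetical glabella-like landmark with a forward-pointing normal
print(peg_endpoint((0.0, 80.0, 30.0), (0.0, 2.0, 0.0), 5.5))
```

In practice the pegs are placed as small cylinders in Blender, but the underlying computation is this offset along the normal.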

To speed up the process, a whole set of muscles, cartilage and glands was imported from another file. Obviously some changes needed to be made in order to fit it to the studied skull.

Gradually, one by one, the muscles were deformed and adapted to the skull.

In the end, all the elements were positioned and, contrary to what many people think, even with all the muscles of the face it is hard to get an idea of how the final work will look once finished.

For the configuration of the skin, the work followed the same method used for the muscles. A kind of general template is imported from another file.

It is then adapted until it fits the shape outlined by the profile sketch, the muscles and the soft tissue depth pegs.

It is possible to visualize the progressive shape transformation undergone by the skin mesh.

By placing the skin and "wearing it" over the muscles, I began to suspect the skull belonged to Dr. Miamoto himself. The shape of the chin and the side view highlighted some features that are evident in photographs (I do not know Dr. Miamoto personally). Upon questioning him, since in this field one cannot work with uncertainty, he told me that yes, it was his skull.

Needless to say I was extremely pleased with the result.

Then it was time to test the quality of the reconstruction against the face of the skull's "owner".

A test was done with a photograph, over which the reconstructed mesh was placed and viewed from the same point of view. Note that the lips lined up almost exactly with the 3D model.

Dr. Paulo then did the same process to filter the skin from the CT and sent it to me as another .ply file. The file was aligned with the reconstruction, showing quite a large degree of compatibility.

Finally, an optical scan of Dr. Paulo's face (done separately from the CT scan) was aligned to the reconstructed face. Note that the line of the lips was again quite compatible, as was the nose breadth.

The data of the reconstructed mesh and the optical scanning mesh were loaded into CloudCompare and a 3D compatibility graph was generated. A significant part of the reconstructed mesh differed by only a few millimeters from the optically scanned mesh.
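The kind of comparison CloudCompare performs can be sketched roughly as follows: for each vertex of the reconstructed mesh, find the distance to the closest point of the scanned mesh. The real tool computes point-to-surface distances with spatial indexing; this brute-force version works on hypothetical toy coordinates in millimeters:

```python
# Rough sketch of a cloud-to-cloud comparison: per-vertex distance from
# each point of the reconstructed mesh to its nearest neighbour in the
# scanned mesh, plus the mean deviation (a crude summary statistic).

def nearest_distance(p, cloud):
    """Distance from point p to the closest point of 'cloud'."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in cloud)

def cloud_to_cloud(recon, scan):
    """Per-vertex distances and their mean, in the clouds' units (mm)."""
    distances = [nearest_distance(p, scan) for p in recon]
    return distances, sum(distances) / len(distances)

# hypothetical toy vertices, in millimeters
recon = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
scan = [(0.0, 0.0, 1.5), (10.0, 0.0, -2.0)]
distances, mean = cloud_to_cloud(recon, scan)
print(distances, mean)
```

The color map in the graph above is essentially these per-vertex distances, with a sign indicating on which side of the scanned surface the reconstruction lies.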

The part in blue, comprising the cheeks, traditionally differs from scans of the living individual, because the soft tissue depth table used as a reference was compiled from cadavers, whose shape may have changed slightly (due to dehydration and the action of gravity at the time of measurement).

This was an example of how a facial reconstruction done with open-source software can provide a rather satisfactory degree of compatibility with the living individual, provided it fulfills the current and already validated protocols.

The use of new technologies and specific tools in Blender 3D contributes to a satisfactory degree of compatibility of the expression lines of the face, making the process faster and easier for those who wish to perform a reconstruction but often do not have an art training background.

The findings of this study are currently being structured as a scientific article. I hope to publish them in a peer-reviewed forensic journal, so that the technical aspects of using exclusively open-source software for forensic facial reconstruction can be adequately exposed and disseminated among those interested in this field.


Acknowledgments

To Dr. Paulo Miamoto, for the continued partnership on several research fronts applying open-source computer graphics to forensic science (and for translating this article into decent English, thank you!).

To the Biotomo Imaging Clinic staff from Jundiaí-SP: Dr. Roberto Matai and Dr. Caio Bardi Matai for the CT scan of the reconstructed skull.

To the Laboratoř Morfologie a Forenzní Antropologie team, from Faculty of Sciences at Masaryk University in Brno, Czech Republic: Prof. Petra Urbanová, MSc. Mikoláš Jurda, MSc. Zuzana Kotulanová and BS. Tomáš Kopecký, for access to the collection of skeletal material of the Department of Anthropology, aid in research of photographic technique for photogrammetry purposes and optical scans.

To the Laboratório de Antropologia e Odontologia Forense (OFLAB-FOUSP) team, from Faculty of Dentistry at University of São Paulo: Prof. Rodolfo Francisco Haltenhoff Melani and MSc. Thiago Leite Beaini for supporting the works in Brazil.

To the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES):  for granting a scholarship for Abroad Doctoral Internship Program (PDSE).

Thursday, 4 July 2013

Photomapping with Quantum GIS (Khovle method)

Hi everybody
Together with Alexander Sachsenmaier and Alessandro Bezzi, I worked out a method to create a photomosaic with QGIS alone. The problem was exporting the single pictures in good quality and at the size of the whole photomosaic, not just the size of the single picture. This works fine with the print composer of QGIS.
In short:
1. edit the file of the ground control points into a .csv file
2. import the .csv file into QGIS (a plugin is required)
3. change the design of the points
4. start the print composer and export the model with the points. Here it is possible to set the dpi: e.g. for an area of 3x2 m, 500 dpi gives a resolution of more or less 1 mm
5. start the georeferencing plugin of QGIS and georeference the model
6. to export the world file from the GeoTIFF of the model, type the following in the terminal:
gdal_translate -co "TFW=YES" input_geotif.tif output_tif_tfw.tif
or open the model in OpenJUMP and close it again
7. georeference all the single pictures with QGIS
8. start the same print composer as the one used for the model and export all the single pictures (don't move the pictures)
9. open the model in GIMP and import all the single pictures as separate layers
10. give the same name to the world file of the model and the photomosaic
Alessandro has already made a video tutorial:

So, I hope this is helpful for somebody...
This work is licensed under a Creative Commons Attribution 4.0 International License.