Thursday, 13 April 2017

ROS and professional archaeology

It has been a long time since we wrote something on this blog, but (as every year) the excavation season leaves us little time for research. For this reason, today I want to break our silence and show some results of our latest studies regarding archeorobotics (the use and development of robotic devices in archaeology).
If you are a regular reader of ATOR, you probably know that since 2012 we have been working on optical sensors to achieve real-time 3D documentation of archaeological evidence (or any kind of data we need to acquire during our projects). Since we started to work with different kinds of drones (UAV, ROV, etc.), we have discovered the nice universe of ROS (Robot Operating System) and SLAM (Simultaneous Localization And Mapping) algorithms. In this post we summarize our research on this topic, focusing on the use of the Kinect. We have already used these techniques on professional projects (like large-scale surveys or excavations), adapting the system to work with RGB-D devices (in underground environments or on cloudy days) or stereo cameras (in direct sunlight).

For instance, we helped our friend Cristian Boscaro of IUAV to test this technology in order to document the tunnels which connect the domes of the Abbey of S. Giustina in Padua. This evening I will post a video which shows a particular use of ROS and the Kinect to solve a technical problem we had in the field today. We were assisting the excavator in digging a trench for a pipeline near the Sanctuary of S. Romedio, in difficult logistic conditions. Despite the absence of archaeological evidence, the Superintendence asked us to document the track of the trench, since what is actually realized during this kind of work often differs from what is planned on the map. Because too few hours were left to accomplish the documentation with GPS and total station, and because that strategy would have been pretty tricky (inside the gorge of the river S. Romedio) and not so accurate (due to the scattering effect of the wood), we decided to use SLAM to get a real-time 3D documentation of the track and later to georeference the result on the LIDAR data which the Autonomous Province of Trento releases freely.
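To give an idea of the georeferencing step, the SLAM model can be aligned to a georeferenced map (such as the provincial LIDAR data) with a 2D similarity transform computed from control points recognizable in both datasets. This is only a minimal sketch with invented coordinates, not the actual pipeline we used in the field:

```python
# Minimal sketch: align a 2D SLAM track to a georeferenced map using two
# control points visible in both datasets. Complex numbers conveniently
# encode the combined scale + rotation of a 2D similarity transform.
def similarity_2d(src_pair, dst_pair):
    (a1, b1), (a2, b2) = src_pair          # control points, SLAM frame
    (x1, y1), (x2, y2) = dst_pair          # same points, georeferenced
    s1, s2 = complex(a1, b1), complex(a2, b2)
    d1, d2 = complex(x1, y1), complex(x2, y2)
    m = (d2 - d1) / (s2 - s1)              # scale and rotation together
    t = d1 - m * s1                        # translation
    def transform(x, y):
        p = m * complex(x, y) + t
        return (p.real, p.imag)
    return transform

# Hypothetical control points: SLAM frame -> map coordinates.
georef = similarity_2d(((0.0, 0.0), (1.0, 0.0)),
                       ((10.0, 10.0), (10.0, 12.0)))
print(georef(0.5, 0.0))  # midpoint of the control points -> (10.0, 11.0)
```

With more than two control points a least-squares fit would be preferable, but the idea is the same: one rigid scale/rotation/translation brings the real-time model into the georeferenced frame.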
The video below shows the final result, which completely satisfies the (high) archaeological tolerance of this project.


That's all for today! Have a nice evening!

Wednesday, 8 March 2017

The story of a chestnut-eared aracari who received a 3D-printed beak


In 1987 my family moved to the municipality of Sinop, a small city in the north of Brazil (135,000 inhabitants). We came with the hope of finding a better life in the face of the difficulties of that time... and we have been here for 30 years! Time passed and my affection for this city solidified in my heart; one of my great dreams was to see its name known across Brazil and the world. With a lot of work and dedication I was able to make my small contribution to seeing this happen, but the most notable fact was undoubtedly an unprecedented project that even brought a European news agency to this municipality: the first aracari prosthesis in the world printed in 3D!

Background

In August of 2015 the firefighters of the city of Sinop, in the interior of Mato Grosso, received an unusual call. A bird very similar to the toucan had been found in the forests of the region with a broken lower beak.

Sent to the Association of Rehabilitation and Reintroduction of Wild Animals (ARRAS), the aracari (P. castanotis) found better conditions there, kept alive and fed by specialists in wild animals.

French TV story

The news of the animal with the broken beak was posted on a local website, and the post raised the possibility of making a prosthesis to replace the damaged structure.

Coincidentally, less than a month earlier our team had created the carapace of Freddy the Turtle, using 3D technology. Because of this, a number of people sent me the link and asked if I could do anything for the aracari.

CBS TV - USA

I contacted the firefighter who had discovered the animal and he put me in contact with zootechnician Dr. Paula Andrade Moreira and veterinarian Dr. Vanessa Nachbar, both volunteers in the care of abandoned animals in the region.

After talking to the two of them, I went to where the bird was kept and met him "personally". I discovered that his name was Tuc-tuc and that, although he was docile and had adapted well to living with humans, he still had a pair of well-articulated beaks that pressed without mercy anyone unwary enough to pick him up for examination.

Chinese TV story

At first I tried to scan the beak using the photogrammetry technique, but the process fell short of the precision we needed. At that time we still did not use dental mold making. Besides the difficulty of capturing the volume of the beak, we would need the specialists of our team to fit the prosthesis, but all of them were in the state of São Paulo, and bringing them to Mato Grosso would be a very expensive endeavor.

German TV story

The months went by and hope was fading, until the possibility of TecnoFASIPE appeared, a computer event organized by the faculty where I teach Computer Graphics classes (I am also a coordinator of TecnoFASIPE). One of the members of the animal prosthesis team, Dr. Paulo Miamoto, was invited to lecture at the conference. I imagined that this could be one of the necessary steps for the Tuc-tuc prosthesis, since Dr. Miamoto was responsible for the replication of the beaks, an accurate technique that aided the digitization of these structures. In addition, he also printed the prostheses and subjected them to a treatment that increased their resistance and made them more pleasing to the eye.

Faced with an emerging possibility, I contacted the other members of the team and decided to sponsor the trip of one of them to the city: Dr. Sergio Camargo, a respected veterinarian, surgeon and beak specialist. Another member, Dr. Rodrigo Rabello, also a veterinarian and surgeon, chose to come to the city at his own expense.

Brazilian (in Portuguese) TV story

In the meantime, I arranged with the SEMA (a Brazilian environmental agency that takes care of animals) specialist, Sandro Depiné, the documentation necessary for the surgical procedure to take place. I also had the honor of meeting Prof. Dr. Elaine Dione, veterinary anesthesiologist, representative of the UFMT Veterinary Hospital in Sinop.

To conclude the initial coordination process, I arranged for a French news agency to follow the placement of the prosthesis; in other words, the event would yield an international story!

Creation process and surgery

Taking advantage of the arrival of the French, I chose to involve local experts in the design of the beak. I contacted two well-known dentists in the city, Dr. Paulo Bueno and Dr. Bruno Tedeschi. For 3D printing I got the support of a former student named Cristian Saggin. I also secured sponsorship for the stay of one of the specialists, thanks to the generosity of a local company, Centauro Systems.

It was insane work: while I filmed with the crew, I also helped coordinate the project. All the steps went well, and we were able to replicate the prosthesis-making technique here in Sinop, in the interior of Mato Grosso, just as we did in São Paulo.


The first step was to create a replica of the beak. The structure was immobilized and a negative was made, without the need to sedate the animal. This negative was filled with plaster and then destroyed so that the cast could be removed. The replica received a series of tracings so that the photogrammetry algorithm could digitize it accurately.

The replica of the beaks was made in three stages: 1) the full beak (closed, to give an idea of the structure), 2) the upper beak (rhinotheca), 3) the lower beak (gnathotheca).

This is necessary because we align the prosthesis to the beak using the complete structure. Incidentally, the prosthesis is created using a donor as reference, in this case the corpse of an aracari of the same species.

The surgery was performed on April 17, 2016 and was successful, allowing the animal to feed naturally as soon as it recovered from anesthesia.

Unfortunately not everything went as expected. A macaw, Gisele, who was also to receive a prosthesis, did not withstand complications during surgery and died. I was very sad at that moment, and this feeling was captured in the story edited by the French. Even so, we learned a lot from the process, and even though the macaw is gone... she certainly contributed a lot to the history of veterinary prosthesis making.

Repercussion and recognition

The procedure had wide repercussions, both in Mato Grosso and internationally. The material edited by the French reporter Zinedine Boudaoud was broadcast on open TV in France and Germany. CBS TV also ran the story, so we even heard about Tuc-tuc from abroad.

French TV story: http://sites.arte.tv/futuremag/fr/animaux-bioniques-futuremag

German TV story:  http://sites.arte.tv/futuremag/de/bionische-tiere-futuremag

CBS TV: http://www.insideedition.com/headlines/15976-toucan-found-injured-on-roadside-gets-new-3d-printed-beak

Chinese TV story: https://v.qq.com/x/cover/sgkr3aihie62aiz/f019597c45k.html

Portuguese TV story: http://g1.globo.com/mato-grosso/mttv-2edicao/videos/t/edicoes/v/equipe-desenvolve-protese-com-impressora-3d-e-salvam-vida-de-aracari-em-sinop/4981097/

Portuguese text story: http://g1.globo.com/mato-grosso/noticia/2016/04/ave-abandonada-com-bico-mutilado-recebe-protese-de-impressora-3d.html

The site of the University of Darmstadt in Germany, the institution that developed the algorithm used in scanning the beak, published a post about the aracari's prosthesis: https://www.informatik.tu-darmstadt.de/de/aktuelles/neuigkeiten/neuigkeiten/artikel/schnabel-aus-dem-3d-drucker-mit-technologie-aus-darmstadt/


As soon as the work was finalized, we received a motion of applause from Sinop's City Council, a project of councilor Fernando Brandão.

For all of us, it was a tremendous honor to see Sinop emerging in the world and bringing good news, tied to scientific scholarship.

Acknowledgements

We thank all those who helped in the project, enabling it and allowing one more life to find fulfillment.

Roberto Fecchio (Veterinarian, leader of the Avengers), Profa. Dr. Elaine Dione (Postdoctoral Veterinary Surgery with Emphasis in Veterinary Anesthesiology), Fátima Escalabrin (Psychologist), Sandro Depiné (SEMA), Anderson Eduardo Wagner (Green Action Institute), Cris Cesco Diel (Biologist, Specialist in Environmental Law and Sustainable Development), Ailton Santiago (Forest Park), Prof. Dr. Paula Moreira (Zootechnician, PhD in Biological Sciences in the Animal Behavior Area), Dr. Vanessa Nachbar (Veterinary Medicine), Cristhian Saggin, Dr. Everton da Rosa (Bucco-maxillofacial Surgeon), Dr. Paulo Bueno (Dental Surgeon), Dr. Bruno Tedeschi (Dental Surgeon), Raissa A. Chagas Martins (Veterinary Medicine), Dr. Luiz Fernando Bianchini Venâncio (Veterinary Medicine), Patricia Ribeiro Barroso (Veterinary Medicine), Dr. Raquel Giachini, Dr. Rodrigo da Costa (Veterinarian), Lis Caroline de Quadros Moura (Zootechnician), Deivison Pinto (Fasipe President), Adriano Barreto (ADS Course Coordinator - Fasipe), Klayton Conçalves, Cesar Rosenelli, Rodrigo da Costa, Jamerson (Reporter), Desirêe Galvão (Reporter), Andressa Godois (Reporter), Zinedine Boudaoud (Reporter), Melice Losso (Reporter), Laércio Romão (Reporter), Edneuza Trugillo and Luíza Trugillo.

Tuesday, 28 February 2017

The 3D facial reconstruction of Saint Valentine, the patron saint of lovers!


Follow the details of the facial reconstruction of Saint Valentine, the patron saint of lovers, whose face was revealed and whose story was published in 32 languages!

Project coordination and initial data capture: Dr. José Luís Lira
3D digital scanning, recovery and digital facial reconstruction: Cicero Moraes
Forensic consulting: Dr. Marcos Paulo Salles Machado

3D printing: CTI Renato Archer
Painting on the 3D bust: Mari Bueno

Background


I met Dr. José Luís Lira back in 2014, on the occasion of the presentation of St. Anthony's face. As a hagiologist, a specialist in the lives of the saints, he introduced me to a series of relics around the world that could be reconstructed.

The first result of our partnership was the reconstruction of the face of St. Mary Magdalene in 2015, made from her supposed skull, kept in the Basilica that bears the saint's name in the French city of Saint-Maximin-la-Sainte-Baume.

Months later, almost at the end of the year, we presented another result, the facial reconstruction of Santa Paulina.

The reconstruction made the news in 32 languages. See the details here.

Dr. Lira has always had free access to religious circles, partly because he studied in a seminary to become a priest; after leaving it, he graduated and built his professional career in the legal area (academically he is a university professor, with a Ph.D. in Law from the Università degli Studi di Messina, Italy). In addition to his connection with the Catholic Church, he has published several books on saints and also belongs to the Equestrian Order of the Holy Sepulchre of Jerusalem, whose origins date back to the First Crusade, when its leader, Godfrey of Bouillon, freed Jerusalem. It is currently an International Public Association of the faithful, with canonical and public juridical personality, constituted by the Holy See, and the Holy Father is responsible for the Order (source: site of the Order).

Chinese TV story

To this day I have worked on the facial reconstruction of seven saints (Anthony of Padua, Mary Magdalene, Saint Sidoine of Aix, Rose of Lima, Martin of Porres, John Macías, and Paulina Visintainer) and two blesseds (Luca Belludi and Ana de los Ángeles Monteagudo). From the beginning of our partnership we planned to write a book about these works, treating both sides of the coin, the technical and the religious.

Ukrainian TV story

On the occasion of the reconstruction of the face of St. Mary Magdalene we had the opportunity to realize those aspirations, as we wrote and published a book about the works with the saint.

We contacted a number of relic-holding churches in order to reconstruct the faces of all of them. However, contacting them via the internet did not always prove a very effective option, whether because of the high number of ignored e-mails, the lone "OK" after many attempts (Mary Magdalene) or the categorical "no".

Vietnamese TV story

Among the series of unanswered emails was the one sent to the Basilica of Santa Maria in Cosmedin, located in Rome, Italy. According to our studies, in that church there was a skull belonging to St. Valentine Martyr.

Turkish TV story

Many sources attribute this relic to the religious who lived between 170 and 270 AD and who gave rise to the day of lovers, but the history of the saint is shrouded in mystery, as the magazine História Viva (Brazil) itself notes:

"However, the whole saga of the martyr is uncertain. There are at least three religious under the name of Valentine, two of them buried in Rome and a third who would have been killed in Africa. The Catholic Church itself, in 1969, stopped celebrating the saint's feast day, considering its origins - and even his existence - uncertain."


In our research the only skull that appears is precisely this one, one of the three mentioned in the text and one of the two bodies buried in Rome.

Important information: when searching for "St Valentine skull" in Google Images, the results show only the skull used in this reconstruction work.

Screenshot with search result in Google Images
When traveling to that city for the canonization of Blessed José Sánchez del Río, Dr. Lira decided to visit the basilica where the remains of the saint (Valentine) were located. Perhaps, by speaking personally with those responsible for their care, the chance of reconstructing the face would be greater. As he related, on the thirteenth he had thought of going to Venice, but at dawn he had severe pains and cramps in his right leg, so he decided to stay in Rome; besides everything else, it was raining. Staying very close to the Vatican, he went to St. Peter's Square to see the preparations for the canonization, thinking he could go "in search" of St. Valentine. After lunch in the vicinity of St. Peter's, he returned to the Square and, to his surprise, a lady selling holy images offered him one of Saint Valentine, with the same image that appears in the stained glass of the Basilica of Terni. Excited, he decided to go to the basilica where the saint's relic was kept.

Holy image received by Dr. Lira

Work in loco


On arriving at the church on October 13, 2016, Dr. José Luís Lira first walked around it, took some photos and, in the sacristy, explained to the secretary of the Rector of the Basilica the reasons for his visit; after a wait, he was received by Fr. Mtanious Hadad, the Rector himself. After a meeting, it was agreed that the hagiologist would come back the next day, the 14th, to take pictures of the skull at 11 am.

Dr. José Luis Lira photographing the relic of St. Valentine in Rome, Italy
At 11 o'clock on the morning of October 14, 2016, Dr. José Luís Lira had access to the altar where the relic of Saint Valentine is kept and spent about 40 minutes photographing it. So that the photos could be taken in tranquility, the main door of the Basilica was closed at the order of Father Hadad; nevertheless, the faithful could still enter through the other entrances. Some even photographed him doing the work.

An interesting fact about Dr. Lira's participation in the data collection is that he had never done it before. The photographs are usually taken by a person trained in the technique, so, to prevent anything from going wrong, he took 251 photos of the reliquary!

Term of authorization for facial reconstruction
Some of the photos used in the photogrammetry process
From these images, 35 were selected for the digitization of the skull and 9 for the reconstruction of the reliquary, which served as a scale reference, since its measurements were taken by Dr. Lira at the time of the photographs.

Previous digital work


This phase of the work was developed by me and includes everything from the scanning of the skull to digital facial reconstruction.

Three-dimensional digital model at 1:1 scale
Two scans were required: one focused on the skull and another on the reliquary structure, which would serve as a scale reference, since the photogrammetry technique does not automatically resize objects. The dimensions of the glazed area were 20x36 cm, according to Dr. Lira's measurements. To scan the skull we used Autodesk® ReCap 360, and for the reliquary we chose Agisoft® PhotoScan©. The positioning of the cameras in 3D was achieved through the PPT-GUI software.
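Since photogrammetry returns a model in arbitrary units, the scaling step amounts to a single factor derived from one distance known in the real world (here, a side of the reliquary's glazed area). A minimal sketch of the idea, with an invented model measurement:

```python
# Minimal sketch: bring an arbitrarily-sized photogrammetric model to
# real-world units using one known reference distance.
def scale_factor(real_length, model_length):
    """Ratio that converts model units into real-world units."""
    return real_length / model_length

def apply_scale(vertices, factor):
    """Uniformly scale a list of (x, y, z) vertices about the origin."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Suppose the 36 cm side of the reliquary glass measures 1.8 units in
# the raw model: every vertex must then be multiplied by 0.2.
factor = scale_factor(0.36, 1.8)               # metres per model unit
print(apply_scale([(1.8, 0.0, 0.0)], factor))  # ≈ [(0.36, 0.0, 0.0)]
```

Measuring the reference on as long a span as possible (here the 36 cm side rather than the 20 cm one) keeps the relative error of the scale factor small.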

Region of the skull scanned in 3D without texture (color)
In order to continue with the work of recovering the missing parts of the skull, the initial files of the scan were sent to the IML expert from Rio de Janeiro, Dr. Marcos Paulo Salles Machado, who analyzed the material and inferred that the skull was male. It is important to note that Dr. Machado received the files with the word "Valentine" hidden and without any preliminary information about the skull, so the analysis was done blind.

Remaining digital vs. Recovery of the skull in various points of view
The 3D scanning algorithm, though very good, was able to reconstruct only one region of the skull: partly because the camera used was in automatic mode and did not balance the illumination very well, and partly because the skull was enclosed in a reliquary that significantly limited photographic coverage of the surface.

To recover the missing part, I used a skull from my 3D digital collection, choosing the piece whose anatomy was most compatible with that of the relic. Some adaptations proved necessary before the "digital remnant" could be fitted. It was necessary to remove, from the donor skull, the region that coincided with the remnant, so that the model could be unified into a single object.

Photography vs. Imposition of the recovered digital skull (with jaw)
In order to recover the missing parts while respecting the real volume of the skull, the virtual camera references, positioned through PPT-GUI, were used for the 9 more distant photos and the 35 closer ones. With this information the anatomy was reconstructed with small manual interventions, recovering the region that had not been automatically digitized by the photogrammetry process.

Digital 3D facial reconstruction


With the data on sex (male), ancestry (European) and age group (55+) provided by Dr. Marcos Paulo Salles Machado, I could begin the 3D facial reconstruction.



Before any modeling work, it is necessary to place the soft tissue thickness markers. These are pins of different heights fixed at different points of the skull. The heights correspond to the thickness of the soft tissue (skin, muscles, etc.) at those points, based on a population mean; in this case we chose data acquired by measuring hundreds of male individuals of European ancestry over 55 years old. More details about this process can be found in the free ebook "Digital Facial 3D Reconstruction" (in Portuguese) through this link.

By tracing a line along the limits of the soft tissue thickness markers, and respecting the nasal projection, we obtain a basic outline of the face in profile.


The main facial muscles are then modeled to aid in reconstruction with an anatomical parameter.


Once the muscles are set up, the next step is to proceed with a low-resolution digital sculpture, so as to shape a base of the face, using as reference the muscles and especially the soft tissue thickness markers. The basic sculpture is finished when the markers are all hidden by the mass corresponding to the soft tissue.


The face region then undergoes a process called retopology ("retopo"), in which a more organized 3D mesh is wrapped over the base sculpture and receives further details, such as the ears.


The 3D mesh is pigmented by projection mapping and digital painting, to give the face its colors.




The next step is to add hair and a beard digitally, in order to finalize the composition of the face according to the age group.


Finally the reconstruction is finished, with the placement of the clothing.


The properly pigmented, dressed and haired model is then finalized. It goes through a process called rendering, which provides light, shadow and greater quality to the image.

According to the remarks on the clothing made by Dr. Lira, St. Valentine wears the pallium currently used by cardinals and archbishops, since at the time it was commonly larger than today's. The figure wears a tunic, an official liturgical vestment of every priest during the celebrations he presides over. The tunic is a white robe that hides the individuality of the priest, so that Christ presiding over the Sacrifice can be perceived in him; it reminds us that the priest, who before being ordained was baptized in Christ, now symbolically puts on the new man (to preside over the Eucharistic Sacrifice). Because he is a holy martyr, it was decided to use the color red for a kind of chasuble, a solemn vestment proper to the priest (a deacon cannot use it), which has no seams on the sides and is used in Sunday Masses and on holy days. Martyrdom is the most primitive form of recognition of the sanctity of a Christian, and therefore a reason for celebration, so we chose this dress, defined by the expert who knows the subject.


An infographic was generated so that interested readers can understand how the work was developed; it was sent, for his appreciation, to Father Mtanious Hadad, who had authorized the facial reconstruction.

Media Projection

The reporter Janet Tappin Coelho, as on other occasions, was chosen by the team to write exclusively about the process of the facial reconstruction of the Saint. On February 13, 2017 the first report was published on the website of the British newspaper Daily Mirror, and on the 14th in the print edition of The Mirror. The news quickly spread around the world, being replicated, so far, in 32 languages!

Conclusion

Coincidentally, the facial reconstruction was finalized on January 14, exactly three months after the start of the project and a month before the feast of Saint Valentine.

For all who participated in this endeavor the result was fantastic. This is the tenth Catholic figure we have had the honor of reconstructing, and we now have a good basis to continue with the project, both in our written work and in the next steps of this job: the 3D printing of the bust (CTI Renato Archer) and later the painting by the artist Mari Bueno.

Coming soon!

Monday, 30 January 2017

Digitizing the excavation

The 21st Conference on Cultural Heritage and NEW Technologies (CHNT 21, 2016) took place in Vienna in the first week of November 2016. On that occasion we gave a presentation entitled "Digitizing the excavation. Toward a real-time documentation and analysis of the archaeological record". Today I found the time to publish it on our blog, to share our research on this topic and in particular some interesting "archeorobotics" projects we are working on.
Below you can see the video of the presentation, made as always with the open source software impress.js and Strut...



... and here is a short description of each slide:

SLIDE 1

The title (strictly related with Digital Archaeology in general)

SLIDE 2

A short presentation of Arc-Team

SLIDE 3

All the work has been done thanks to Free/Libre and Open Source Software. In order to keep going on with our research regarding archaeological methodology we need the source code!

SLIDE 4

The fundamental schema of the archaeological cognitive process elaborated by G. Leonardi in 1982. The schema shows the progressive reduction of the information regarding human actions before and during the archaeological excavation (human activities --> traces in the soil --> natural and anthropic degradation of the record --> archaeological excavation --> archaeological documentation), until interpretative knowledge starts to recover information during the post-excavation stage (with analytical data interpretation and reconstructive hypotheses)

SLIDE 5

A practical example of the schema from the site of Torre dei Sicconi in Italy (a medieval castle):
1. Human activities (summarized in the building of the castle, the medieval battle and the destruction of the main structure and the controlled explosion during the Great War)

2. Traces in the soil (summarized in the evidence of the battle, of the controlled explosion and of recent agrarian activities, while only negative layers were found regarding the construction of the structure)

3. Natural and anthropological degradation (summarized in the battle, the explosion, the agrarian activities and the normal natural dynamics)

4. Archaeological excavation (the most destructive investigation: at Torre dei Sicconi all the layers concerning the tower and the main central building were removed by this activity)

5. The importance of archaeological documentation comes from the destructive nature of the analysis (excavation). Being a long-term project, Torre dei Sicconi was documented with both traditional and digital methodologies

6. Data analysis. During this stage our knowledge of the site started to grow again. In this case both archaeological and historical techniques have been used

7. Reconstructive hypotheses represent the maximum increase of our (interpretative) knowledge of the site. For Torre dei Sicconi this stage has been achieved just for the central part of the castle (tower and main building)

SLIDE 6

The archaeological excavation is the most critical (destructive) stage of our knowledge regarding a site.

SLIDE 7

Arc-Team's excavation strategies:
1. increasing the amount of information registered while decreasing the time-consuming operation of archaeological documentation
2. on-site direct observation for a better interpretation, avoiding at the same time any kind of data selection
3. moving the lab into the field (chemical and physical analyses)

SLIDE 8

A milestone of our research: in 2006 the development of the "Metodo Aramus" gave us better (more precise and accurate), faster and correct (equalized) 2D digital documentation with FLOSS.

SLIDE 9

Another milestone: between 2008 and 2009, the migration from pure photogrammetric software to SfM and MVSR methods (through the development of a GUI for Pierre Moulon's application Python Photogrammetry Suite) gave us better and faster 3D digital documentation

SLIDE 10

Even today we still use a combination of 2D and 3D techniques to meet different requirements of various archaeological projects

SLIDE 11

2D digital documentation through GIS is fast enough for on site interpretation during emergency excavation

SLIDE 12

Software like QGIS allows direct interpretation in the field, without the need for long post-processing

SLIDE 13

3D documentation gives better results, but needs longer processing time (even if data acquisition in the field, which is always performed, is quick)

SLIDE 14

We achieved a (lower quality) 3D data acquisition which has the fundamental characteristic of being real-time, thanks to open hardware (archeorobotics)

SLIDE 15

Our experience in archeorobotics dates back to 2006 with our first prototype of a UAV, which could be used professionally only from 2008.

SLIDE 16

Currently our archeorobotics research concerns our latest prototype of the Archeodrone (a UAV specifically designed for aerial archaeology)...

SLIDE 17

... some CNC machines and, above all, the Fa)(a 3D, an open hardware 3D printer which, without any kind of modification, was able to satisfy our archaeological needs (like 3D printing casts of unique finds or extracting and printing DICOM data from X-ray CT scans)...

SLIDE 18

... and the ArcheoROV, the open hardware underwater Remotely Operated Vehicle which we developed with the Witlab Fablab

SLIDE 19

Some pictures of the first test of the ArcheoROV

SLIDE 20

A first step into 3D real-time documentation through SLAM (Simultaneous Localization and Mapping) techniques has been taken with the open source ROS (Robot Operating System) and RTAB-Map via Kinect...

SLIDE 21

... and tested for 3D real-time documentation in wooded areas (where SfM and MVSR or laser scanning would have been too slow), reaching in about one hour of work a model (with real dimensions) of 75,000 points.

SLIDE 22

A benefit of archeorobotic systems like these (which are ROS capable) is the possibility of changing the sensor in order to adapt the hardware to different situations, using monocular or stereo cameras (for odometry) as well as LIDAR or SONAR devices.

SLIDE 23

Another benefit is the wide range of possibilities offered by the different open source software packages (e.g. RTAB-Map, LSD-SLAM, REMODE, Cartographer, etc.)

SLIDE 24

Currently the precision/accuracy level of a real-time 3D archaeological documentation cannot be compared with the results achieved with post-processing through traditional SfM - MVSR systems, but there are good prospects for improvement.

SLIDE 25

Nowadays, based on our professional experience, the best use of such devices seems to be during extreme operations, such as high mountain archaeology, glacial archaeology, underwater archaeology or speleoarchaeology

SLIDE 26

Another important step to improve the reaction time of professional archaeology, in order to avoid errors during the critical stage of the excavation, is the possibility of performing some basic archaeometric analyses (chemical and physical) directly in the field.

SLIDE 27

Considering that any archaeological layer is composed of two different elements, the skeleton (macroscopic) and the fine earth (microscopic), it is obvious that different analyses can be performed in different work environments.

SLIDE 28

For instance, in the case of the skeleton, a fast petrographic (ontoscopic) analysis can easily be performed directly in the field (defining allogenic elements), while further (more specific) investigations need an equipped laboratory.

SLIDE 29

Also in the case of the fine earth, some rough descriptive analyses can be performed in the field, while laboratory investigations can reach very detailed results (e.g. with the Scanning Electron Microscope).

SLIDE 30

The field analysis of the fine earth is more problematic (compared with the skeleton), since the most common tests (e.g. soil texture by feel) are anametric and subjective.

SLIDE 31

For this reason, archaeometric tests are the better choice (e.g. the sedimentation test).

SLIDE 32

The sedimentation test in the field can be improved with basic physical analysis (e.g. applying Stokes' law in order to distinguish sand, silt and clay by the time they need to settle).
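As a sketch of the physics involved: assuming quartz-density grains settling in room-temperature water (typical textbook values, not the project's own calibration), Stokes' law translates particle diameter into settling time, which is what separates the texture fractions in a jar test.

```python
def settling_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal velocity (m/s) of a sphere of diameter d (m) by Stokes' law:
    v = g * d^2 * (rho_p - rho_f) / (18 * mu).
    Defaults: quartz grain in water at ~20 C (assumed values)."""
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

def settling_time(d, depth=0.10):
    """Seconds needed for a grain of diameter d to fall `depth` metres."""
    return depth / settling_velocity(d)

# USDA boundaries: sand/silt at 50 um, silt/clay at 2 um
for name, d in (("sand", 50e-6), ("silt", 2e-6)):
    print(f"{name}: a {d * 1e6:.0f} um grain settles 10 cm in {settling_time(d):.0f} s")
```

With these assumptions a 50 um grain (the sand/silt boundary) clears 10 cm of water in well under a minute, while a 2 um grain (the silt/clay boundary) needs several hours, so anything still in suspension after that is clay.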

SLIDE 33

A further field improvement of the sedimentation test is the possibility of storing the data directly in a PostgreSQL/PostGIS database (through some specific fields of the archaeological recording sheet), using the open source application geTTexture.
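A minimal sketch of what such a recording step looks like on the database side. The table and column names below are hypothetical placeholders (geTTexture's actual schema may differ); the point is building a parameterised INSERT that a psycopg2 cursor could execute against PostgreSQL.

```python
def texture_insert(su_id, sand_pct, silt_pct, clay_pct,
                   table="sedimentation_test"):
    """Build a parameterised INSERT for a texture-recording table.
    NOTE: table/column names are hypothetical, not geTTexture's real schema.
    Returns (sql, params) ready for cursor.execute(sql, params)."""
    total = sand_pct + silt_pct + clay_pct
    if abs(total - 100.0) > 1.0:
        raise ValueError(f"fractions sum to {total}%, expected ~100%")
    sql = (f"INSERT INTO {table} (su_id, sand_pct, silt_pct, clay_pct) "
           "VALUES (%s, %s, %s, %s)")
    return sql, (su_id, sand_pct, silt_pct, clay_pct)

# Example: recording the measured fractions of a stratigraphic unit
sql, params = texture_insert("US 1023", 40.0, 35.0, 25.0)
```

Using placeholders (`%s`) instead of string interpolation for the values keeps the recording sheet safe from malformed input, and the sanity check catches fractions that do not add up to a whole sample.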

SLIDE 34

An example of the use of geTTexture

SLIDE 35

Other archaeometric tests which are simple to perform directly during the excavation are based on basic chemical analyses, specifically the quantification of compounds like phosphates or nitrates.

SLIDE 36

Moreover, with some simple workarounds, it is possible to turn anametric (boolean) analyses of carbonates or organic substances into metric (quantitative) observations.

SLIDE 37

Archaeological excavation is a destructive process, subject to fatal (irreversible) errors. Moreover, the reduced time and budget of professional and emergency archaeology increase stress during decision-making stages.
Real-time 3D mapping can speed up data interpretation, avoiding data selection in the field, while on-site chemical and physical analyses (geoarchaeology and archaeometry) can define a better (data-driven) digging strategy.


I hope this presentation can be useful. Have a nice day!

Sunday, 29 January 2017

Gufan, the 2000 year old Brazilian


Background


In 2013 I visited the Paranaense Museum with Dr. Moacir Elias Santos. At that time I was in Curitiba to present the face of an Andean mummy, on the occasion of the II Happy Mummy's Day.
Panel printed with Gufan's facial reconstruction process - Photo: Karen Becker
Dr. Moacir had told me that I would be surprised by the museum's rich collection. And indeed I was: in every room I could see pieces upon pieces which, together, made up a historical panorama not only of the state of Paraná, but of Brazil and even of other countries.


TV story about the facial reconstruction of Gufan and the use of virtual reality


After marvelling at old vestments, pictures, coins and infographics, we arrived at a room where the bones of an aboriginal child, a few hundred years old, were being presented.

I wasted no time and took a series of photographs of the skull, already with the intention of digitizing it in 3D and later reconstruct it.

As soon as I returned to Mato Grosso, that is exactly what I did. I showed the work to Dr. Moacir and he appreciated it, but he asked me to contact those responsible for the museum so that they would know about the work I was doing; after all, I had not agreed with them on the use of the pictures.



I called the museum, explained the situation and the clerk transferred me to Dr. Claudia Parellada. Dispelling my initial fears of a stern reception, she was interested in the idea of the reconstruction and not only allowed me to post the work on my site, but also raised the possibility of building a partnership, since the museum held other skulls, some of them over a thousand years old.

The facial reconstruction project


The story does not stop there. In 2008 I had traveled to Curitiba for the first time at the invitation of my friend Alessandro Binhara, to lecture on Blender and computer graphics at the educational institution he was working at. The talk was given and we agreed that one day we would work on a project together.

Steps of facial reconstruction

Nine years passed and the opportunity appeared. I arranged an in-house workshop with Mr. Binhara, the Beenoculus staff and my other buddy, the developer Sandro Bihaiko. The plan was to bring together a number of experts and study some applications using virtual and augmented reality.

In the meantime I realized that it was a good opportunity to resume the discussions with the staff of the Paranaense Museum and I went back to talking with Dr. Claudia Parellada and Dr. Renato Carneiro, director of the institution.


I learned then that they had a rich collection of skulls, and among them was Gufan, a 2000-year-old proto-Jê autochthon. The name Gufan comes from the Kaingang language and means "ancestor". Thanks to the integrity of the anatomical piece, it proved to be the most suitable for reconstruction.

Dr. Parellada and Dr. Carneiro collected all the data about Gufan and sent me a series of photos that served as a basis for 3D scanning by the photogrammetry technique. Shortly afterwards I had the skull digitized and the reconstruction work started.

Facial reconstruction


The process of facial reconstruction went smoothly, with nothing new in relation to the other works. Starting with the positioning of soft tissue thickness markers, I then went through digital sculpture, retopology (simplification of the mesh), mapping and pigmentation, and finally the placement of hair and the generation of images.

The base of facial texture
It must be documented that I received the mapping references with an international flavor. My friend Santiago González photographed one of his students in Lima, Peru, and sent a series of images to be used in the work. I take this opportunity to thank him and the student!

I had to resort to this solution because here in my city I could not find any individuals with indigenous traits to photograph. I thought about it a little and turned to my Peruvian friends, since in that beautiful country a considerable part of the population carries the appearance of its historic and warrior people.

The Virtual Reality


With Gufan's face reconstructed, I traveled to Curitiba, where I would meet the team to carry out our project. The work took place at the premises of Beenoculus, a company that assembles virtual reality glasses and produces interactive content.

The excitement was so great that our workshop ended up being entirely about creating a presentation for Gufan. Beenoculus donated a state-of-the-art headset, my friend Binhara came in with cutting-edge machinery, including a generous video card so the application would run without stuttering, and Sandro Bihaiko wrote the application with the help of the local staff.

While the presentation was being developed on one side, we went to the Paranaense Museum to check that everything was right with the space where the unveiling would be held. A panel was assembled illustrating the stages of facial reconstruction, we discussed the distribution of the elements and seats, and everything was set; we just had to wait for the big day.

The face presentation

The presentation of Gufan's face was held on January 24, 2017. Initially we expected 20 to 30 people, but I reached out to the press in order to surpass that number, without much pretension, of course.



Before traveling to Curitiba I composed a press release with the digital technology personnel and the management of the Paranaense Museum. I also telephoned several TV stations and newspapers in the city, and soon both the biggest newspaper (Gazeta do Povo) and the biggest TV station (RPC, Globo) showed interest in the story. The result of all this translated into two newspaper covers and a 7-minute report with two live insertions in the midday edition of January 24.

And during the presentation, instead of 20 or 30 people, 170 came, according to the organizers! Many people had to attend the two lectures standing. A total success!

Acknowledgment


I just have to thank everyone who made this possible: Claudia Parellada, Renato Carneiro, Alessandro Binhara, Sandro Bihaiko, Anelise Daux, Junior Evangelista Terrabuio, Rawlinson Terrabuio, Matheus Dalla, Victor Ullmann, Amilton Binhara, Adelina Binhara, Lucas Gabriel Marins, Durval Ramos, Angieli Maros, Fernanda Fraga, Keyse Caldeira, Caroline Olinda, Everton da Rosa and Karen Lisse Fukushima.

Not forgetting to mention the companies and institutions involved: Paranaense Museum, Azuris, Beenoculus, State Secretary of Culture of Paraná, Government of Paraná, Arc-Team Italy and all the press.

I hope from the bottom of my heart that this partnership continues and that the future holds good news. A big hug and thank you for reading!

Wednesday, 28 December 2016

The devils' boat

This year, thanks to Prof. Tiziano Camagna, we had the opportunity to test our methodologies during a particular archaeological expedition, focused on the localization and documentation of the "devils' boat".
This strange wreck is a small boat built during World War I by Italian soldiers, the "Alpini" of the battalion "Edolo" (nicknamed the "Adamello devils"), near the mountain hut J. Payer (as reported in Luciano Viazzi's book "I diavoli dell'Adamello").
The mission was a derivation of the project "La foresta sommersa del lago di Tovel: alla scoperta di nuove figure professionali e nuove tecnologie al servizio della ricerca” ("The submerged forest of lake Tovel: discovering new professions and new technologies at the service of scientific research"), a didactic program conceived by Prof. Camagna for the high school Liceo Scientifico B. Russell of Cles (Trentino - Italy).
As already mentioned, the target of the expedition was the small boat currently lying on the bottom of lake Mandrone (Trentino - Italy), previously localized by Prof. Camagna and later photographed during an exploration in 2007. The lake is located at 2450 meters above sea level. For this reason, before involving the students in such a difficult underwater project, a preliminary mission was carried out in order to check the general conditions and perform some basic operations. This first mission was directed by Prof. Camagna and supported by the archaeologists of Arc-Team (Alessandro Bezzi and Luca Bezzi for underwater documentation, and Rupert Gietl for GNSS/GPS localization and boat support), by the explorers of the Nautica Mare team (Massimiliano Canossa and Nicola Boninsegna) and by the experts of Witlab (Emanuele Rocco, Andrea Saiani, Simone Nascivera and Daniel Perghem).
The primary target of the first mission (26 and 27 August 2016) was the localization of the boat, since the exact place where the wreck was lying was not known. Once the boat was re-discovered, all the necessary operations to georeference the site were performed, so that the team of divers could concentrate on the correct archaeological documentation of the boat. In addition to the objectives mentioned above, the mission was an occasion to test for the first time, in a real operating scenario, the ArcheoROV, the open hardware ROV developed by Arc-Team and WitLab.
Target 1 was achieved quickly and easily during the second day of the mission (the first day was dedicated to the divers' acclimatization at 2450 meters a.s.l.), since the weather and environmental conditions were particularly good, so that the boat was visible from the lake shore. Target 2 was reached by positioning the GPS base station on a referenced point of the "Comitato Glaciologico Trentino" ("Glaciological Committee of Trentino") and using the rover from an inflatable kayak to register some control points on the surface of the lake, connected through a reel with strategic points on the wreck. Target 3 was completed by collecting pictures for a post-mission 3D reconstruction through simple SfM techniques (already applied in underwater archaeology). The open source software used in post-processing were PPT and openMVG (for 3D reconstruction), MeshLab and CloudCompare (for mesh editing), MicMac (for the orthophoto) and QGIS (for archaeological drawing), all of them running on the (still) experimental new version of ArcheOS (Hypatia). Unlike what has been done in other projects, this time we preferred to recover the original colours of the underwater photos (to help the SfM software in the 3D reconstruction), using a series of commands of the open source software suite ImageMagick (soon I'll write a post about this operation). Once the primary targets were completed, the spare time of the first expedition was dedicated to secondary objectives: testing the ArcheoROV (as mentioned before), with positive feedback, and the 3D documentation of the landscape surrounding the lake (to improve the free LIDAR model of the area).
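The colour-recovery step deserves a quick illustration. As a rough stand-in for the actual ImageMagick recipe (which is not detailed here), a gray-world correction rescales each colour channel so that its mean matches the global mean; since underwater photos are typically red-starved and blue/green heavy, this pushes the red channel back up:

```python
def gray_world(pixels):
    """Gray-world colour correction: scale each RGB channel so its mean
    matches the mean over all channels. `pixels` is a list of (r, g, b)
    tuples in 0-255. A sketch of the idea, not the ImageMagick commands."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3.0
    return [tuple(min(255, round(p[c] * target / means[c])) for c in range(3))
            for p in pixels]

# Two sample pixels from a red-deficient, blue-heavy underwater image:
corrected = gray_world([(40, 120, 160), (60, 140, 180)])
```

After correction the red values rise and the blue values drop, which restores contrast between surfaces and, in turn, helps the SfM software find matching features across photos.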
What could not be foreseen for the first mission was serendipity: before emerging from the lake, the divers of the Nautica Mare team (Nicola Boninsegna and Massimiliano Canossa) found a tree on the bottom of the lake. From an archaeological point of view it was soon clear that this could be an important discovery, as the surrounding landscape (periglacial grassland) is without wood (the treeline lies almost 200 meters below). The technicians of Arc-Team geolocated the trunk with the GPS, in order to perform a sampling during the second mission.
For this reason, the second mission changed its priority and focused on the recovery of core samples by drilling the submerged tree. Further analysis (performed by Mauro Bernabei, CNR-IVALSA) demonstrated that the tree was a Pinus cembra L., with the last ring dated back to 2931 B.C. (4947 years old). Nevertheless, the expedition maintained its educational purpose, teaching the students of the Liceo Russell the basics of underwater archaeology and performing with them some tests of a low-cost sonar, in order to map part of the lake bottom.
All the operations performed during the two underwater missions are summarized in the slides below, which come from the lesson I gave to the students in order to complete our didactic task at the Liceo B. Russell.



Acknowledgements

Prof. Tiziano Camagna (Liceo Scientifico B. Russell), for organizing the missions

Massimiliano Canossa and Nicola Boninsegna (Nautica Mare Team), for the professional support and for discovering the tree

Mauro Bernabei and the CNR-IVALSA, for analysing and dating the wood samples

The Galazzini family (tenants of the refuge “Città di Trento”), for the logistic support

The wildlife park “Adamello-Brenta” and the Department for Cultural Heritage of Trento (Office of Archaeological Heritage) for close cooperation

Last but not least, Dott. Stefano Agosti, Prof. Giovanni Widmann and the students of Liceo B. Russell: Daniele Borghesi, Isabel Torresani, Gianluca Corazzolla, Davide Marinolli, Federico Gervasi, Anna Panizza, Matteo Calliari, Massimo Gasperi, Marco Slanzi, Leonardo Crotti, Nicola Pontara, Riccardo Stanchina


Tuesday, 27 December 2016

Basic Principles of 3D Computer Graphics Applied to Health Sciences


Dear friends,

This post is an introductory material, created for our online and classroom course of "Basic Principles of 3D Computer Graphics Applied to Health Sciences". The training is the result of a partnership that began in 2014, together with the renowned Brazilian orthognathic surgeon, Dr. Everton da Rosa.

Initially the objective was to develop a surgical planning methodology using only free and freeware software. The work was successful and we decided to share the results with the orthognathic surgery community. As soon as we posted the first contents related to this research on our social media, the demand was great, and it was not limited to Dentistry professionals, but extended to all fields of human health, as well as veterinary medicine.

In view of this demand, we decided to open up the initial, theoretical contents of the topics covered by our course (which is quite practical). In this way, those interested will be able to learn a little about the concepts involved in the training, while those in the area of computer graphics will have at hand a material that will introduce them to the field of modeling and digitization in the health sciences.

In this first post we will cover the concepts related to 3D objects and scenes visualization.

We hope you enjoy it, good reading!

Chapter 1 - Scene Visualization

You already know much of what you need


Cicero Moraes
Arc-Team Brazil

Everton da Rosa
Hospital de Base, Brasília, Brazil

What does it take to learn how to work with 3D?

If you are a person who knows how to operate a computer and has at least edited a text, the answer is: little.

When editing a text we use the keyboard to enter the information, that is, the words. The keyboard helps us with shortcuts, for example the popular CTRL + C and CTRL + V for copy and paste. Note that we do not use the system menu to trigger these commands, for a very simple reason: it is much faster and more convenient to use the shortcut keys.

When writing a text we do not limit ourselves to a sentence or a page. We almost always format the letters, making them bold, setting them as a title or italicizing them, and we import images or graphics. These latter actions involve what can be called interoperability.

The name is complex, but the concept is simple. Interoperability is, roughly speaking, the ability of programs to exchange information with one another. That is, you take the photo from a camera, save it on the PC, maybe use an image editor to increase the contrast, then import that image into your document. Well, the image was created and edited elsewhere! This is interoperability! The same is true of a table, which can be made in a spreadsheet editor and later imported into the text editor.

This amount of knowledge is not trivial. We could say that you already have 75% of all the computational skills needed to work with 3D modeling.

Now, if you are one of those who play or have played first-person shooter games, you can be sure that you have 95% of everything you need to model in 3D.

How is this possible?

Very simple. In addition to all the knowledge surrounding most computer programs, as already mentioned, the player also develops other capabilities inherent to the field of 3D computer graphics.

When playing on these platforms it is necessary first of all to analyze the scene with which one is going to interact. After studying the field of action, the player moves around the scene, and if someone appears in the line of sight, the chance of that individual taking a shot is quite large. This ability to move and interact in a 3D environment is the starting point for working with a modeling and animation program.

 

Observation of the scene

When we arrive at an unknown location, the first thing we do is observe. Imagine that you will take a course in a certain space. Hardly anyone "rushes into" an environment. First of all we observe the scene, make a general survey of the number of people and even study the escape routes in case of a very serious unforeseen event. Then we move through the studied scene, going to the place where we will wait for the beginning of the activities. In a third moment, we interact with the scenario, both using the course equipment, such as notebook and pen, and talking to other students and/or teachers.

Notice that this event was marked by three phases:
1) Observation
2) Displacement
3) Interaction

In the virtual world of computer graphics the sequence is almost the same. The first part of the process consists in observing the scene, getting an idea of what it is like. This command is known as Orbit. That is, an observer orbits the scene while watching it, as if it were an artificial satellite around the Earth. It maintains a fixed distance and can see the scene from every possible angle.

But one does not live by orbiting alone: you must approach to see the details of some specific point. For this we use the zoom commands, already well known to most computer operators. Besides zooming in and out (+ and - zoom), you also need to walk through the scene or move horizontally and vertically (a movement known as Pan).
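The orbit command described above has simple geometry behind it. As a minimal sketch (the function name and angle convention here are illustrative, not any particular program's API), the observer sits on a sphere of fixed radius around the target, and orbiting only changes the two angles:

```python
import math

def orbit_camera(target, distance, azimuth_deg, elevation_deg):
    """Position of an orbiting observer: a point on a sphere of radius
    `distance` centred on `target`. The Orbit command changes only the
    two angles; the scene itself never moves."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (target[0] + distance * math.cos(el) * math.cos(az),
            target[1] + distance * math.cos(el) * math.sin(az),
            target[2] + distance * math.sin(el))

# Swinging 90 degrees around the origin: the viewpoint changes,
# but the distance to the observed object stays fixed.
p1 = orbit_camera((0.0, 0.0, 0.0), 10.0, 0.0, 30.0)
p2 = orbit_camera((0.0, 0.0, 0.0), 10.0, 90.0, 30.0)
```

Zoom, by contrast, changes only `distance`, and Pan translates `target` itself; together these three parameters are all an observation command ever touches.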

A curious fact about these scene-observation commands is that they almost always focus on the mouse buttons. See the table below:


Above we have a comparison of three programs that will be discussed later. The important thing now is to know that in the three basic commands we see the direct involvement of the mouse. This makes it very clear that if you come across an open 3D scene and use these combinations of commands, at the very least you will move the observer.


The phrase "move the observer" has been spelled out so that you are aware of a situation: so far we are only dealing with observation commands. By the characteristics of its operation, orbiting can very well be confused with the command for rotating an object, and it is very common for beginners in this area to confuse one with the other.


To illustrate the difference between them, observe in the figure above the scene in the center (Original), which is the initial reference. On the left we see the orbit command in action (Orbit). Note that the grid element (in light gray), the reference for what would be the floor of the scene, accompanies the cube. This is because what actually moves in the scene is the observer, not the elements. On the right (Rotate) we see the grid in the same position as in the center scene; that is, the observer remained at the same point, but the cube underwent a rotation.
Why does this seem confusing?

In the real world, the one we live in, the observer is... you. You use your eyes to see space with all the three-dimensional depth that this natural binocular system offers. When we work with 3D modeling and animation software, your eyes become the 3D View, that is, the working window where the scene is presented.
In the real world, when we walk through a space, we have the ground to move on. It is our reference. In a 3D scene this initial ground is usually represented by the grid we saw in the example figure. It is always important to have a reference to work with; otherwise it is almost impossible, especially for those who are starting, to do something on the computer.

Display Type


"Television makes you look fatter."

Surely you have already heard this phrase in some interview, or from an acquaintance, or from someone who has been filmed and seen the result on the screen. In fact, it can happen that the person seems more robust than "normal", but the truth is that we are all more full-bodied than the structure our eyes present to us when we look at ourselves in the mirror.

In order for you to have a clear idea of what this means, you need to understand some simple concepts that involve the view of an observer in a 3D modeling and animation program.

The observer in this case is represented by a camera.


Interestingly, one of the most used representations for the camera within a 3D scene is a pyramid icon. See the figure above, where three examples are presented. Both the Blender 3D software and MeshLab use a pyramid icon to represent the camera in space. The simplest way to represent this structure can be a triangle, like the one on the right (Icon).

All this is not for nothing. This representation holds in itself the basic principles of photography.

You may have heard of the pinhole camera (camera obscura). Its operation is very simple: it is an archaic camera made with a small box or can. On one side it has a very thin hole and on the other a photographic paper is placed. The hole is covered with dark adhesive tape until the photographer positions the camera at a point. Once the camera is positioned and still, the tape is removed and the film receives the external light for a while. Then the hole is capped again, the camera is transported to a studio and the film is developed, presenting the scene in negative. All simple and functional.


For us, what matters are a few small details. Imagine that we have an object to be photographed (A); the light coming from outside enters the camera through a hole made in the front (B) and projects the inverted image inside the box (C). Anything outside this capture area will be invisible (illustration on the right).


At this point we already have the answer to why the camera icons are similar in different programs. The pyramid represents the projection of the camera's visible area. Note that the projection of the visible area is not the whole visible volume; it is a small presentation of how the camera receives the external scene.


Anything outside this projection simply will not appear in the scene, as in the case of the sphere above, which is partially hidden.

But there's still one piece left in this puzzle, which is why we seem more robust to TV cameras.

Note the two figures above. Comparing them, we can identify some characteristics that differentiate them. The image on the left seems to be a structure being squeezed, especially when we look at the eyes, which seem to jump sideways. On the right, we have a structure that, in relation to the other, seems to have the eyes more centered, the nose smaller, the mouth more open and a little higher; we see the ears showing, and the upper part of the head is notably bigger.

Both structures have a lot of visual differences... but they are the same 3D object!

The difference lies in the way the photographs were made. In this case, two different focal lengths were used. 


Above we see the two pinhole cameras from the top. The image on the left indicates a focal length value of 15 and on the right a focal length value of 50. On one side we see a more compact structure (15), where the background is very close to the front, and on the other a more stretched structure, with a narrower capture angle (50).

But why, in the case of the 15 focal length, do the ears not appear in the scene?


The explanation is simple and can be approached geometrically. Note that in order to frame the structure in the photo it was necessary to bring it close enough to the light inlet. In doing so, the captured volume (BB) only picks up the front of the face (Visible), hiding the ears (Invisible). In the end, we have a limited projection (CC) that suffers a certain deformation, giving the impression that the eyes are slightly separated.


With a focal length of 50 the visible area of the face is wider. We can verify this with the projection of the visible region, as we have done previously.


In this example we chose to frame the structure very close to the camera's capture limits and thus highlight the differences in capture. We clearly see how a larger focal length implies a wider capture of the photographed structure. A good example is that, with a value of 15, we only glimpse the lower tips of the ears; at 35 the structures are already showing; at 50 the area is almost doubled; and at 100 we have an almost complete view of the ears. Note also that at 100 the marginal region of the eyes crosses the outline of the head, while in orthogonal view (Ortho) the marginal region of the eyes is aligned with that outline.
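The relation between focal length and how much of the subject fits in the frame can be made concrete. Assuming the focal lengths above are millimetres on a full-frame (36 mm wide) sensor, which is an assumption on my part, the horizontal angle of view follows from simple trigonometry:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view in degrees: fov = 2 * atan(w / (2 * f)).
    Assumes a full-frame, 36 mm wide sensor (an illustrative assumption)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The same focal lengths discussed above: shorter lens, wider angle
for f in (15, 35, 50, 100):
    print(f"{f:>3} mm -> {horizontal_fov(f):5.1f} deg")
```

A 15 mm lens sees roughly a 100-degree slice of the world, while a 100 mm lens sees about 20 degrees, which is why the short lens must come so close to the face to fill the frame, and why the projection then deforms.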

But what is an orthogonal view?

For comprehension to be more complete, let us go by parts.


If we isolate the edges of all the views, align the eyebrows and the base of the chin, and superimpose the shapes, we will see in the end that the smaller the focal length, the smaller the structural area visualized. Among all the shapes, the one that stands out most is the orthogonal view: it simply has more area than all the others. We see this at the extreme right, attested by the blue color appearing in the marginal regions of the overlap.

But how does orthogonal projection work?


The best example is the facade of a house. Above, on the left we have a view with focal length 15 (Perspective) and, on the right, one in orthogonal projection.


Analyzing the capture with focal length 15, we have the blue lines, as usual, representing the boundary of the visible area (the limit of the generated image), while the other lines show the projection of some key parts of the structure.


The orthogonal view, in turn, does not suffer the deformation of the focal length. It simply receives the structural information directly, generating a graph consistent with the measurements of the original; that is, it shows the house "as it is". The process is very reminiscent of an X-ray projection, which represents the radiographed structure without (or almost without) perspective deformation.


Looking at the images side by side, from another point of view, it is possible to attest a marked difference between them. The bottom and top of the side walls are parallel, but if you draw a line along each of these parts in perspective, those paths will end up at an intersection known as the vanishing point (A and B). In the case of the orthogonal view, the lines never meet, because... they are parallel! That is, we again see that the orthogonal projection respects the actual structure of the object.
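The vanishing-point behaviour falls directly out of the math of the two projections. A minimal sketch (illustrative function names, focal length normalised to 1): perspective projection divides by depth, so a line parallel to the view axis drifts toward the image centre; orthographic projection just drops the depth, so the line stays put.

```python
def perspective(point, f=1.0):
    """Pinhole projection: divide x and y by depth z, so the farther
    the point, the closer it lands to the image centre."""
    x, y, z = point
    return (f * x / z, f * y / z)

def orthogonal(point):
    """Orthographic projection: simply drop the depth coordinate."""
    x, y, _ = point
    return (x, y)

# Two points on the top edge of a side wall (a line parallel to the view axis):
near, far = (2.0, 1.0, 5.0), (2.0, 1.0, 50.0)

persp_near, persp_far = perspective(near), perspective(far)  # edge drifts to centre
ortho_near, ortho_far = orthogonal(near), orthogonal(far)    # edge stays put
```

Under perspective the far point projects closer to (0, 0) than the near one, so extending the edge leads to the vanishing point at the centre; under orthographic projection both points land on the same image coordinates, so parallels remain parallel.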

So, does that mean the orthogonal view is always the best option?


No, it is not always the best option, because it all depends on what you are doing. Take as an example the front views discussed earlier. Even if the orthogonal view offers a larger capture area (D), if we compare the exclusive regions of the orthogonal view (E) with the exclusive regions seen by the perspective with focal length 15 (F), we will attest that, even covering a smaller area of pixels, the view with perspective deformation includes regions that were hidden in the orthogonal view.

Moraes & Salazar-Gamarra (2016)
That answers the question about whether or not people gain weight on camera. The longer the focal length, the more robust the face looks. But this does not mean fattening or not; it actually shows the structure: the orthogonal image presents the individual in the measurements most coherent with the real volumetry.

The interesting thing about this is that it shows that the eyes deceive us: the image we see of people does not correspond to what they actually are, structurally speaking. Nor does what we see in the mirror.

Professional photographers, for example, are experts at exploiting this reality to extract the maximum quality in their works.

View 3D

Have you ever wondered why you have two eyes and not just one? Most of the time we forget that we have two eyes, because we see only one image when we observe things around us.  

Take this quick test.


Find a small object to look at (A), about a meter away. Position your index finger (B) pointing up, 15 cm in front of your eyes (C), aligned with the nose.

When looking at the object, you will see one object and two fingers.


When looking at the finger, you will see one finger and two objects.


If you observe with just one eye at a time, you will attest that each has a distinct view of the scene.

This is a very simple way to test the limits of the binocular visual system characteristic of humans. It also makes very clear why classical painters close one eye when measuring the proportions of an object with the paintbrush in order to replicate it on the canvas (see the bibliography link for more details). If they used both eyes, it simply would not work!

You must be wondering how we can see only one image with both eyes. To understand this mechanism a little better, let's take 3D cinema as an example.

What happens if you look at a 3D movie screen without the polarized glasses?


Something like the figure above: a distortion well known to anyone who has overdone alcoholic beverages. However, even though it seems otherwise, there is nothing wrong with this image.


When you put on the glasses, each lens delivers the information intended for the corresponding eye. We then have two distinct images, just as when we close one eye to see with only one side.

Let's reflect a little. If the blurred image enters through the glasses and becomes part of the scenery, transporting us into the movie to the point of being startled by explosion debris that seems to fly toward us... then the information we receive from the world may be blurred as well. Except that, in the brain, something "magical" happens: instead of showing this blur, the two images come together and form only one.

But why two pictures, why two eyes?

The answer lies precisely in the explosion debris coming toward us. If you watch the same scene with just one eye, the objects do not "jump" out at you. This is because stereoscopic vision (with both eyes) gives you the ability to perceive the depth of the environment. In other words, the notion of space we have is due to our binocular vision; without it, although we still perceive the environment thanks to perspective, we largely lose the ability to judge its volume.
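The depth cue can be put in numbers with a toy calculation (not from the original text; the interpupillary distance and object distances are illustrative assumptions). Each eye sees a nearby point from a slightly different angle, and the nearer the point, the larger that angular difference:

```python
# Stereoscopic depth cue as a toy calculation. Assumed numbers: human
# interpupillary distance ~6.4 cm; two objects at 1 m and 3 m.
import math

def disparity_deg(distance, ipd=0.064):
    """Convergence angle (in degrees) of a point seen by both eyes."""
    return math.degrees(2 * math.atan((ipd / 2) / distance))

near, far = disparity_deg(1.0), disparity_deg(3.0)
print(f"1 m: {near:.2f} deg, 3 m: {far:.2f} deg")
# The nearer object subtends a larger convergence angle; the brain reads
# the difference between the two eyes' images as depth. With one eye,
# this disparity signal is gone.
```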

To better understand the depth of the scene, see the following image.


If a group of people were asked which of the two objects is in front in the scene, it is almost certain that most would say the object on the left.


However, not everything is what it seems: the object on the left is actually farther away. This example illustrates how we can be deceived by monocular vision, even with perspective.

Wouldn't it be easier for modeling and animation programs to support stereoscopic visualization?

In fact it could be, but the most popular programs still do not offer this possibility. Given the popularization of virtual-reality glasses and the convergence of graphical interfaces, this niche may well gain full support for stereoscopic visualization in the production phase. For now, however, this is more a projection of the future than a present reality, and today's interfaces still rely on many elements that go back decades.

It is for these and other reasons that we need the help of an orthogonal view when working on 3D software.

If on one hand we do not yet have affordable 3D visualization solutions with depth, on the other hand we have robust tools tested and approved through years and years of development. In 1963, for example, the Sketchpad graphic editor was developed at MIT. Since then, the way of approaching 3D objects on a digital screen has not changed that much.

Most important of all, the technique works very well, and with a little training you adapt to the methodology without trouble, to the point of forgetting that you ever had difficulty with it.


Almost all modeling programs, similar to Sketchpad, offer the possibility of dividing the workspace into four views: Perspective, Front, Right, and Top.

Even though none of these is a perspective from which we perceive depth, and even though the other views are a sort of "facade" of the scene, what we get in the end is a very clear idea of the structure of the scene and the positioning of the objects.

If, on the one hand, dividing the scene into four parts reduces the visual area of each view, on the other hand the specialist can expand any one of those views to fill the whole monitor.

Over time, the user becomes adept at switching viewpoints with the shortcut keys, gathering the necessary information and avoiding mistakes in the composition of the scene.


A good sample of the versatility of 3D orientation by orthogonal views is the "hat on the monkey" exercise given to beginner students of three-dimensional modeling. The exercise asks the students to put a hat (a cone) on the Monkey primitive. When they try to use only the perspective view, the difficulties are many, because it is very hard for beginners to locate themselves in a 3D scene. They are then taught how to use the orthogonal views (Front, Right, Top, etc.). The tendency is for students to position the "hat" using only one view as a reference, in this case Front. But when they return to the perspective view, the hat appears displaced. Viewed from another point, such as Right, they realize the object is far from where it should be. Over time the students "get the hang of it" and change the point of view while positioning objects.

If we look at the axis gizmo that appears at the left of the figures, we see that in the Front view we have the X and Z information, but Y is missing (precisely the depth in which the hat was lost), and in the Right view we have Y and Z, but X is missing. The secret is always to orbit the scene or alternate viewpoints, so as to have a clear notion of the structure of the scene, thus grounding your future interventions.
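The relationship between the views and the missing axes can be sketched in a few lines of code (a toy illustration, not from the original text; the "hat" coordinates and the axis conventions are illustrative assumptions following the common Blender convention):

```python
# Each orthogonal view of a 3D scene simply discards one axis.
# A hypothetical "hat" object at (x, y, z) = (0.2, 1.5, 2.0):

hat = (0.2, 1.5, 2.0)

def front(p):  # looking along -Y: we see X and Z; depth Y is lost
    x, y, z = p
    return (x, z)

def right(p):  # looking along -X: we see Y and Z; depth X is lost
    x, y, z = p
    return (y, z)

def top(p):    # looking along -Z: we see X and Y; depth Z is lost
    x, y, z = p
    return (x, y)

print(front(hat), right(hat), top(hat))
# Any single view leaves one coordinate undetermined, which is why the hat
# can look correctly placed in Front yet be far off when seen from Right;
# combining two views (or orbiting) pins down all three coordinates.
```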

Conclusion


For now that’s it; we will soon return with more content addressing the basic principles of 3D graphics applied to the health sciences. If you would like to receive more news, point out a correction, make a suggestion, or get to know the work of the professionals involved in composing this material, please send us a message or like the authors' pages on Facebook:



We thank you for your attention and send you a big hug.

See you next time!
This work is licensed under a Creative Commons Attribution 4.0 International License.