Data Log

Screencasts on Computer Vision Research


I've started posting screencasts on how to look up computer vision authors and follow their research: Video lectures on Quora.

PhD and moving on


It's been a long time since I've blogged on my site. I've moved most of my activity to Quora, where I mostly answer and ask questions in Computer Vision, Machine Learning, Artificial Intelligence and Psychology.

I'm currently interning at Virginia Tech under the supervision of Devi Parikh, and I'm starting my PhD in Dynamical Neuroscience this fall at UCSB under the supervision of Miguel Eckstein.


How to get started in Computer Vision - A Guide for the CS undergrad


Guide on Quora

ThinkUNI and Philanthropy



Last year ended on a high note thanks to two events that were pivotal in my life.

The first was helping to organize ThinkUNI (I was co-organizer and host). ThinkUNI was a meeting of Universidad Nacional de Ingenieria alumni who have successfully done internships and PhDs at universities around the world, primarily in the USA and Europe. We encouraged attendees and gave instructions on how to do the same, hoping to reinforce a diaspora movement of more Peruvian students in science and engineering. Each speaker also gave a brief introduction to their field of research.

A lot of people were very happy with an event of this magnitude, dubbing it "inspirational". We filled a 300-seat auditorium, with another 300 people on the waiting list outside and another 400 viewers via Twitcam. You can find more information and slides at the ThinkUNI website:

I also ended the year with a philanthropic act, helping to preselect and award four $3,300 scholarships ($13,200 total) to students of Universidad Nacional de Ingenieria. Apparently the authorities and directors of ProUNI liked the speech I delivered at the Manuel Pardo y Lavalle Prize ceremony (the most important ceremony of the year, where the valedictorians of all 26 engineering fields are awarded personal computers), and decided to give me some leverage in their decision making for picking the next ProUNI fellows. They've been awarded research internships at the following places:

Kevin Villegas: Purdue University - Nanotechnology
Pedro Baldera: Yale University - Chemistry
Jorge Aparicio: Purdue University - Nanotechnology (GISCIA member)
Victor Paredes: Texas A&M University - Robotics (GISCIA member)

Note 1: Now that I've finished my undergrad, I'm writing my research guide, "How to start off in research (and survive)", targeted at other undergraduates from developing countries who also want to excel in research despite the many socioeconomic setbacks they may face. Time to wait it out for grad school and hope for the best! (Read more below)

Note 2: I've been visiting some AI and Computer Vision blogs, and I'd forgotten how important it is to keep posting information and ideas about these matters on my website! I'll be posting an AI FAQ soon. In the meantime, feel free to follow me and read my answers on Quora:

Grad School Applications & News


Deza, A., Jammalamadaka, A., Manjunath, B.S. “Vesselshift: A mean-shift based method for neurite tracing”, (submitted) IEEE International Symposium on BioMedical Imaging (2013)

Deza, A., “When the Gold Standard goes Gray” , International Computer Vision Summer School - Computer Vision and Medicine Essay Contest, Sicily, Italy. July 2012

Deza, A. “Feature Vector based Image Segmentation”, Machine Learning Summer School Poster Session, West Lafayette, IN, USA. June, 2011

NEWS: I'm currently working long-distance with Professor Devi Parikh on Scene Understanding and the Cloud. I'll be going to Virginia Tech for a short internship with her (March-August '13) prior to starting grad school (I still don't know where I'll end up). Devi and I met at UCSB at a talk she gave on Relative Attributes, the work that won the Marr Prize (one of the most prestigious awards in Computer Vision).

If you are reading this and are from admissions, you can find most of my latest updates here in the Home section, as well as in the Files section.

Link to a youtube interview I gave:


Link to a lecture I gave on what life is like in California, Computer Vision, and how to get research internship opportunities:


Link to GISCIA Lab website (the Lab I'm currently directing):
Don't forget to check out our Research and Publications section!

Related GISCIA videos (Our Biped Robot walking in action):


The video is a sample of the quality of the robotics and AI projects we've done, which have won multiple national awards.
(Designed and programmed by my friend Edison Alfaro)

NEWS: A photo of me presenting the next ProUNI fellows and scholars:


From left to right: Arturo Deza (me), Kevin Villegas, Pedro Baldera, Jorge Aparicio and Victor Paredes.
Bottom: Alberto Benavides de la Quintana, founder of ProUNI, an NGO that awards scholarships to the most brilliant students of Universidad Nacional de Ingenieria. He is also the founder of Buenaventura Mining Company, a Harvard University alumnus, and one of Peru's richest men.

The four are great friends of mine, and I influenced the decision on the amount of money given to each of them for their future internships. I also gave a speech at the ceremony, originally targeted at the valedictorians of each of the 26 engineering majors that Universidad Nacional de Ingenieria offers.

The places and areas where they will be going for their internships are:

Kevin Villegas: Purdue University - Nanotechnology
Pedro Baldera: Yale University - Chemistry
Jorge Aparicio: Purdue University - Nanotechnology (GISCIA member)
Victor Paredes: Texas A&M University - Robotics (GISCIA member)

Vesselshift: my 1st submitted paper!


A couple of days ago (November 7th) I finally submitted my research in a 4-page paper to the IEEE International Symposium on BioMedical Imaging.
I guess it can't be publicly available yet because it's under review, but here's a sneak peek at the title.


It feels really good to accomplish this, regardless of the committee's decision: finishing a piece of work and submitting it to an international conference
with proceedings has always been one of my undergrad goals. Life really does reward you if you put in enough effort, so remember, regardless of where you come from: do your best and no excuses.
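The paper itself isn't public yet, but since the method is mean-shift based, here's a minimal sketch of the generic mean-shift idea in Python: repeatedly move a point toward the kernel-weighted mean of nearby samples until it settles on a local density mode. This is just the textbook procedure for illustration, not the actual Vesselshift algorithm (which adapts it to tracing neurites in microscopy images):

```python
import numpy as np

def mean_shift_step(points, x, bandwidth):
    """One mean-shift iteration: move x toward the Gaussian-weighted
    mean of the sample points."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w[:, None] * points).sum(axis=0) / w.sum()

def mean_shift(points, x, bandwidth=1.0, tol=1e-5, max_iter=100):
    """Iterate until x stops moving; x drifts to a local density mode."""
    for _ in range(max_iter):
        x_new = mean_shift_step(points, x, bandwidth)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

Starting from a point away from the data, the iterate climbs the kernel density estimate and converges near the cluster center; the bandwidth controls how local that climb is.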

Tools for getting started in Computer Vision research?


This is the pre-game of the short manual I'm starting, called "How to do research as an international undergraduate (and survive)". I won't be finishing it until the end of the year due to my heavy workload and grad school applications.

Who's boss?

Conference/journal AI EF reputation score:

Professor h-Index searches:
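For anyone unfamiliar with the metric above: an author's h-index is the largest h such that they have h papers with at least h citations each. A quick illustrative sketch of the computation:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h

# Example: 4 papers with at least 4 citations each, but not 5 with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```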

To look up papers:

Top Computer Vision Conferences:

Computer Vision Deadlines:

My personal AI-related book list:

Human Vision:
Vision: A Computational Investigation into the Human Representation and Processing of Visual Information - David Marr
Steps Towards a Theory of Visual Information: Active Perception, Signal-to-Symbol Conversion and the Interplay Between Sensing and Control - Stefano Soatto
Basic Vision: An Introduction to Visual Perception - Robert Snowden, Peter Thompson and Tom Troscianko

Cognitive Science:
Incognito - David Eagleman
Proust was a Neuroscientist - Jonah Lehrer

Machine Learning & Statistics:
Machine Learning: An Algorithmic Perspective - Stephen Marsland
Probabilistic Graphical Models - Daphne Koller and Nir Friedman
Pattern Recognition and Machine Learning - Christopher Bishop

Computer Vision:
Multiple View Geometry in Computer Vision - Richard Hartley and Andrew Zisserman
Computer Vision: A Modern Approach - David Forsyth and Jean Ponce
Visual Object Recognition: Synthesis Lectures on Artificial Intelligence and Machine Learning - Kristen Grauman and Bastian Leibe
Computer Vision: Algorithms and Applications - Richard Szeliski

Computer Vision & Image Processing Coding:
Programming Computer Vision with Python - Jan Erik Solem
Learning OpenCV - Gary Bradski and Adrian Kaehler
Fundamentals of Digital Image Processing: A practical approach with Examples in Matlab - Chris Solomon and Toby Breckon

Algorithms & Graph Theory:
Introductory Graph Theory - Gary Chartrand
Think Complexity - Allen Downey
Python Algorithms: Mastering Basic Algorithms in the Python Language - Magnus Lie Hetland

Tech-related magazines and YouTube channels you should check out:
Wired, IEEE Spectrum, TechCrunch, TED, BigThink, Sixty Symbols, GISCIA (the Lab I run).

and of course my #1 source for everything:

Kudos to Zoya Gavrilov, for having an excellent list of valuable books for getting started in A.I.
Her list can also be viewed at her webpage.

Grad School Apps


I've been busy these days filling out my grad school applications. I've already decided to go for Computer Vision, and I'll be applying to several universities in the US to accomplish this. Some will be CS departments, others Brain Science, because it is equally important for me to get to know the human visual system and how it works.

Here's a quick picture that I got after meeting with Don Alberto Benavides de la Quintana, the founder of Buenaventura Mining Company. He's Peru's mining guru and pioneer, and one of Universidad Nacional de Ingenieria's most accomplished alumni. I believe he got his B.S. in Mining Engineering and then went to Harvard University for an M.S. in Mining Geology.

This picture means a lot to me because he sponsored my research at UCSB by giving me a fellowship. He's been one of Peru's technological leaders, and a personal inspiration. This picture also gives me a sense of responsibility to continue with a new era of R&D in Peru after I get my PhD.

" Buenaventura is Peru's largest publicly-traded precious metals company and a major holder of mining rights in Peru. The Company is engaged in the mining, processing, development and exploration of gold, silver and other metals via wholly-owned mines, as well as through its participation in joint exploration projects. "

He also happens to be one of Peru's wealthiest and most philanthropic people.


News: I'll finally be submitting my research paper to the IEEE International Symposium on BioMedical Imaging by the end of the month! A year of research and hard work finally paying off! I promise to upload it for everyone once it's finished!

My Talk on doing Internships abroad and Computer Vision 101


In this YouTube video, Modesto Montoya invited me to give a talk at the Physical Sciences Department of Universidad Nacional de Ingenieria in Lima, Peru. Modesto is the interviewer himself, and the one who had invited me to my first radio interview the week before. He's a very smart and respectable scientist who got his PhD in France studying particle physics. In Peru he's one of the leading popularizers of science, which is really admirable because we're still learning the value of research in Peru.

I talk about my life in California (the good, the bad and the ugly): lab life, the social/party life (which was actually quite awesome), and some personal anecdotes. Then I move on to an intro to AI and Computer Vision, where I comment a bit on the research of Alyosha Efros (CMU), Aude Oliva (MIT) and Antonio Torralba (MIT). Finally, I give some advice on how to get international research opportunities and pointers on how to start off in research. At the end there is a short Q&A where I answer some questions.


video link:

Radio Interview


Today was the first time I gave a live radio interview. It was really cool: I talked about my research on AI and neuroinformatics and on how to get international outreach as a researcher. The interview is in Spanish, but you are more than welcome to check it out!



GISCIA - Research Lab


After coming back to Peru from the USA and Europe, I was given the role of President of the GISCIA research group at my university. This is the second year I'll be running the lab: I was President for all of 2011, while the first half of 2012 was run by a good friend and colleague, Victor Paredes, who along with Jorge Aparicio and Santiago Cortijo made up the main head staff of the group. They are also responsible for designing and building the humanoid robot that has won several national awards!

Here is a link to the formal GISCIA research lab website. We're now uploading as much video/photo information as possible, and the papers are still accessible to everyone:

Here is a link to our public Facebook group, where we post a mashup of sci/tech news along with our lab research updates. This Facebook fan page started barely a week ago:

Here is also the lab's group email in case anyone wants to contact us: giscia.uni@gmail.com

The Brady Prize Essay


This is the Brady Prize essay I submitted for the ICVSS (International Computer Vision Summer School) essay competition. Even though I didn't win, it felt good to compete with PhD students in a debate about Computer Vision and Medicine. In the essay I answer the question "At what point does the human diagnostician's eye no longer remain the 'gold standard'?" with a neuroinformatics / brain science approach.


PDF version : Download here

Link :

Note: Even though I do not study at UCSB, I chose to represent that institution because the research I did there during my internship was focused on neuroinformatics, the approach I used in the essay.

I am still partly affiliated with the University of California, Santa Barbara (UCSB), as I continue to work on a paper with the Center for BioImage Informatics, but I am a student of Universidad Nacional de Ingenieria (UNI) in Lima, Peru.

Some of these ideas were also discussed at the 4th National Neuroscience and Complex Systems Symposium in Lima, Peru where I gave a talk streaming live from Paris, France. More details can be found on the Summer Schools in Europe blog entry.

Link :

Grad School Application Guide for International Students


This is a guide with pointers on how to get into a highly ranked university for grad school. (Un)fortunately, I still don't know if it works, because I myself am still an undergraduate, hoping to start a PhD program in Fall 2013 if the winds blow in the right direction. But anyway, here we are, and I am happy to share this information and my thoughts on the process with you. Please write to me at arturodeza@gmail.com with any feedback and/or additional comments, and don't be afraid to share it with whomever you like.

I hope you enjoy it!

PDF version: Download Here

By the way, the guide is written in a very informal blog-like style.

Computer Vision Summer School weeks in Europe



As I relax in front of the Baia Samuele Hotel pool in Sicily, Italy, I'm devoting some time to write about how awesome the current summer school, ICVSS, is, and how amazing last week's summer school in Grenoble, CVML/VRML 2012, was.

From the aggressive and straightforward speaking style of Cordelia Schmid, to the memorable AZ (Andrew Zisserman) in a friendly FAQ session, to the stand-up-comedy-style rivalry between Antonio Torralba and Alyosha Efros, the playful Aude Oliva, the mathematical humor of Francis Bach, the clean style of Deva Ramanan, and of course the overtime social all-star Jean Ponce (the John Travolta dancing lookalike): all the speakers were exceptional, leading researchers in their fields.

It was also incredible to finally meet some of the post-docs and graduate students I used to follow in the Computer Vision community. By accident I stumbled upon Tomasz Malisiewicz, author of a computer vision blog, a former grad student of Alyosha Efros at CMU, and now a post-doc working with Antonio Torralba. "You're the guy from the computer vision blog?! That's awesome!" He was very modest about it, and he is clearly an academic role model to follow. We became very good friends during the summer school. The same went for Varun Ganapathi, who is currently a research scientist at Google and had recently finished his PhD at Stanford under the supervision of Daphne Koller. I remember reading his papers two years ago, and when I went up to him I just said, "Varun, from Stanford, right? B.Sc. in Physics, am I right?" A very brilliant and funny guy!

After introducing myself to most of the grad students, I would tell them in a joking fashion that I'm not a stalker, I'm just a big fan of their work, unless I am actually a stalker and I just don't know it yet (laughs, chuckles & drum rolls).

Contact-wise, summer schools are just amazing: you have the chance to meet people from all over the world and exchange ideas (beyond research) at many levels: academia and industry; undergrads, grad students, post-docs and professors. You get to enrich yourself with different cultures and ways of thinking, and if you're a competitive person like me, you can benchmark yourself against people from around the world. What are undergrads elsewhere up to? What type of research have they done? What other activities do they do? What are their ideas about life? If you want to go to Berkeley, talk to people from Berkeley; if you want to go to CMU, talk to people from CMU. Summer schools let you discover the magic of a small group of people from various backgrounds and institutions.

Completely random comment: I would introduce myself as the Peruvian-Californian hybrid, as I would usually get the comment, "You look Peruvian, but you talk like you're from California?" (puzzled face).

Break point ! --

On July 14th, after the first summer school in Grenoble, I had to give a talk at the National Neuroscience and Complex Systems Symposium in Lima, Peru. This was almost impossible, because I was in Grenoble, so I had the great idea of going to Paris for the weekend and giving the talk streaming live from there. Not only did I enjoy the beautiful view of the Eiffel Tower from my hotel, but July 14th also happened to be France's national day, so I checked out the fireworks below the Eiffel Tower towards Trocadero. It was probably one of the best Saturdays of my life. Of all the speakers in the symposium, which included grad students, post-docs and professors, I was surprised to be the only undergrad. My talk was about my research at UCSB on neuron tracing and the DIADEM Challenge. I can't post much about my research yet, as it's still being finished up for submission to a journal later this year. But feel free to check out the DIADEM Challenge webpage to get a short glimpse of it!


End of Break point ! --

Today the summer school is almost over, and there will be a beach party social later in the evening. Academia is looking good: I can't imagine anything more perfect than traveling around the world, exchanging ideas and presenting your research. Maybe the money isn't as high as in industry, but if you like teaching and doing research, I think it's the place to be!

Some funny jokes & anecdotes I've learned in these two weeks:

"Computer Scientists parties are like SVM's with 5 girls in the middle and a concentric cloud of points (100 dudes) around."
- Credits to Italian ladies' man Simone, finishing his PhD at Università di Trento.

  • This has changed over time though - if you know what I mean ;)

A sexy lady walks in the bar and an intrepid Machine Learning researcher looks over checking her out:
"You have a nice posterior"
To which she replies, playfully:
"Thank you, I'm working on my prior!"
- Credits to Jeroen Chua from University of Toronto for that one.

Keynote lecture highlights:

"So to publish in CVPR, the week before you have to rush looking for your data and include some random fancy Dirichlet, Markov process mambo-jambo"
- Alyosha Efros joking about how Computer Vision datasets are getting larger and larger in his VRML/CVML presentation in Grenoble.

"You see, I tricked you!"
- Aude Oliva outsmarting the audience during her live image memorability tests.

"SVM was cool 10 years ago, now awesome people are using Logistic regression"
- Francis Bach : ML expert - before it was mainstream ;)

"So if you ever wonder who labels all the LabelMe dataset, the person behind this is my mother!"
- Antonio 'the matador' Torralba from MIT making the crowd go wild.

- Stefano Soatto showing the crowd who's boss with his strong & aggressive yet intellectually stimulating speaking style at the ICVSS Reading Group session

"Vision is like evolution, we have to fail in order to make progress"
- Jitendra Malik, when we talked offline. He's a very wise man.

Some extra links:
CVML/VRML 2012, Grenoble, hosted by INRIA (check out the online materials ;)):
ICVSS 2012, Sicily hosted by Università di Catania.

Now I have to go back to Peru next week and wrap up the journal paper I'm working on (long-distance with UCSB) if I want to get into a good place for grad school.

To Gifiniti and Beyond!


And yet again, I immersed myself in a new adventure, which has now come to an end!

In January, as you might know, I moved to Isla Vista, California to work on a 6-month research internship in BioImage Informatics: implementing and redesigning a neuron tracing algorithm (I like to say "3D neuron reconstruction" just because it sounds cooler at social events; I hope they DO mean the same thing!). Anyhow, as part of the UC Santa Barbara culture, socializing (a more professional word than partying) is common and a habit one must adapt to, even though it tends to become quite excessive and even uncontrollable at times.

Nevertheless, as a scientist, I like to experiment, have some fun, explore other worlds, try new paradigms and question more and more the ever-chaotic (yet beautiful) world of North American college socializing, which at UCSB is taken to the max. At one of these overtly ridiculous feasts, I bumped into a Russian guy: Sieva. Sieva was a cool guy, very self-confident, arrogant at times, but always sure of whatever idea he transmitted. I had the impression that Russians tend to drink a lot (recall vodka), and indeed I met him in a similar circumstance. He enthusiastically explained his business idea to me: a new way of gift giving, a social gift-giving platform with collective intelligence. He still had a diffuse concept of what it could be, but he knew it would be big, and he convinced me it would be big. A couple of drinks later we were exchanging numbers, after I talked to him about my background in Artificial Intelligence and Machine Learning, and he told me to meet up with the team the upcoming Monday for a brainstorming session.

That Monday I woke up utterly exhausted and noticed I had received a text message from him at around 7 am, saying that everyone on the team should meet at 7 pm at Phelps Hall 1408. Man, did this guy have some guts to wake up that early! (I'm usually more productive at night, and thus wake up late.) 7 pm it was, and I got to meet the rest of the team. All of them were brilliant, and I just had the feeling that we would get somewhere. Sometimes one can perceive an instantaneous, exotic success vibe from a group of people, and these guys came from all the unorthodox backgrounds you could imagine: a taxation-specialist surfer dude, a CERN physics PhD, a History major with a love for partying, an overly confident Russian CEO, a tech-savvy CS hacker girl, a wannabe-rapper finance analyst, and then of course me: the Peruvian ninja coder. Never in my life had I seen such a diverse, talented group. We were on to something big! You could feel it.

That was our first business meeting, and today was my last. We started out as a group of strangers and eventually became a brotherhood of Californian business partners. I include the word Californian because the lifestyle was Californian: waking up early for business meetings, coding over spring break, talking on the phone for long hours debugging multiple lines of code, 'SCRUM'-ing and throwing out ideas over email, Facebook and Dropbox, and wild-night partying. It was just like Silicon Valley (or I hope it was, if my mind isn't oversaturated by Hollywood movies).

Five months after our first meeting, we won 2nd place out of the 46 initial teams in the NVC UCSB tech start-up competition, winning a $2,500 check for our efforts. I look back at the team with a sense of amazement, yet shame at the same time, because I would like to stay in Santa Barbara, help them grow and see how the business evolves, but I promised myself a sense of duty in academia, because I still want to lead my country's technological development when the time comes. But there's nothing to regret: the contacts and friends I made, and the start-up know-how, were totally worth it!

What was my role at Gifiniti?

I was an assistant coder to Derek Barge, who was the lead coder (and physics genius). Additionally, I shared and discussed multiple back-end/front-end development ideas with Derek Barge and Saiph Savage (lead algorithm developer). Most of our progress was then discussed with the biz team to get immediate feedback, such as GUI preferences, required inputs and wanted outputs. Saiph was in charge of the machine learning context-based retrieval searches; kudos to her for her brilliant expertise in web searches and queries. During the implementation phase, I refined my shell scripting, worked on my Python hacks (the Jinja2 and Flask modules), and learned JavaScript, CSS, HTML & AJAX.

What can I say about Gifiniti?

I believe that two years from now it will be a great company, very successful, and it will start to become well known. The amount of push that everyone on the team puts in is immeasurable. It is as if everyone has a perfect balance and control in their life, and everyone on the team has the feeling in the back of their mind that the company WILL be big (despite the competition), and that it IS THE billion dollar idea.

How has Gifiniti helped me grow and become a better person?

I value diversity a lot. The ideas at Gifiniti are so diverse and cutting-edge that you end up learning from every other team member. Probably the best lessons I learned from working with everyone on the team are discipline, courage and self-respect. Discipline, because waking up early in the morning and splitting your time between two important projects (Gifiniti / lab research) takes a huge amount of time, dedication and mental focus. Courage, because it takes guts to actually expose yourself and pursue the billion dollar idea with so much competition out there! Self-respect, because everyone on the team respected their work, but also their body and mind: they would practice a sport, go to the gym, have some fun on weekends, and on occasion their brains would still nurture themselves with that mysterious 'spark' that comes from doing non-work-related activities.

Arturo, but you said that you would go to Academia and become a Computer Vision Researcher? Why did you waste your time on that Gifiniti 'nonsense'?

Academia is for sure, and I would like to become a professor in the long run. Gifiniti actually gave me the business-world insight that I will need for crucial moments further on in academia. Every professor needs funding, and funding is important to actually get work done: buy more lab material, hire more post-docs, motivate the grad students, etc. From the little that I know, academia is not just research; networking and knowing how and with whom to negotiate is a HUGE plus, and this wild, exciting start-up adventure was an excellent first step to getting to know the bigger picture of science- and technology-related businesses. They once told me that in Hollywood it's not about how much you know, but who you know… and maybe academia is not just about how much you know, but who you know as well (recall Paul Erdos ;) )


News and Media:

Why is Social Gift-Giving a Billion Dollar Market?
Gifiniti : (Sooner than expected ;) )

Want to start a start-up business? Check out our Executive Summary!


Exploring UC Santa Barbara


It has been almost 3 months since I last posted something relevant on my website. This reminds me of what people say about tenured professors who stop doing research… maybe I got a little bit cocky after getting my research fellowship.

Nevertheless, I'll try to keep my head in the game and still update you guys on my research and some new things I've learned after spending 2 months at UCSB.

1) Never underestimate an invited speaker. At one of the first lab meetings I attended at UCSB, they invited Devi Parikh, a professor at the Toyota Technological Institute at Chicago, who came to talk about her research on human debugging applied to natural images. To be honest, at the beginning I underestimated her talk because of the place she was coming from. I had never heard of TTI-Chicago (which is a really good place to get a CS PhD, by the way!), and what was even worse, I hadn't heard her name before in the vision community. I was terribly wrong, and at fault.

I do remember enjoying the part of her talk where she talked about Prof. Torralba's and Prof. Oliva's research at MIT on Scene Understanding. I loved those papers and remember reading them all night back in Peru. When I went back to them after her talk, I noticed that she was one of the main authors as well! Wow! I had asked her several questions without knowing she was actually behind the intellectual creation of the paper. What was even more surprising is that I didn't know what the Marr Prize was (oh, silly me!), and she had been awarded that prize in 2011 for her paper and work on Relative Attributes.

For those who do not know what the Marr prize is :

Weeks later I sent her an email asking about some parameters and results in her paper on image memorability. She was really helpful! And I don't regret asking her questions without previously knowing who she was; if I had known, I would probably have saturated her with all of the ideas I had on visual perception! She is a very charismatic woman and professor, and I would definitely like to work with her later on!

This kinda' brings me to another thing I learned here: know ALL the authors of your favorite papers. Moreover, do not underestimate or overestimate a person based on the university/institution they come from.

As undergraduates we usually see students, staff and professors from the "top 10 schools" as godlike. This actually becomes unimportant when making decisions for grad school. Of course everybody wants to go to MIT or Harvard, but is the professor you really want to work with there? Is the research you want to do there? It is very important to keep that in mind.

Devi Parikh :
TTI - Chicago :
Scene Understanding & Memorability :

2) Remember that you have a brain! I had felt a repulsion towards Biology ever since high school. Maybe that's not 100% accurate, because I enjoyed the Microbiology class and using a microscope to check out what's happening in that infinitesimal world. Regardless of how it happened, I just remember 'not liking' Biology (and Chemistry), and pursuing the other sciences, such as Math and Physics.

I had a mind-shift once I arrived here, and I realized that Biology is very important. This is also due to the fact that in some way or another Cognitive Science is related to some Biological aspects of NeuroScience.

My advisor, Prof. Manjunath, assigned me to work on neuron tracing, and I am supervised by a grad student here who knows a lot about Computer Vision and Biology as well. The project is based on the DIADEM Challenge: I am now basically working on two parallel research projects. One is natural-image related (Scene Understanding), and the other is bio-image related. This new project makes me affiliated with the Center for BioImage Informatics at UCSB. There are a lot of brilliant people there as well!

It is because of this last project that I realized that the Brain is still unexplored and full of mysteries. I may not pursue a career in BioImaging, but getting to know how neurons and dendrites work and interact is just fascinating! I wouldn't have discovered some of their properties, or modeled how they work for research purposes, if I hadn't been involved. What makes it even more interesting is an analogy I came up with the other day while biking home from the lab. I had always had a personal conflict over whether or not I should pursue a PhD in Physics, which really fascinates me as well. Basically, the ultimate goal of Physics is to know how the Universe works (or at least I hope so), and yet we know so little about ourselves: how we as human beings work and operate biologically, and, what is even more difficult, how we think, make decisions and create. It is in this context that I said to myself: "The Universe is nothing but God's Brain". Both are great mysteries that will have to be solved! (I am agnostic, by the way.)

1) Meeting people here is easy and great! If I hadn't gone out one particular Saturday night, I wouldn't have met some really brilliant people with whom I am now building an A.I.-oriented start-up company for gift recommendations.
2) Freebirds

1) Maturity in these types of exchange programs is an implicit consequence of meeting new people, living by yourself, exploring a completely new world, and envisioning life in a different way.
2) After high school I never painted again. Five years later, living in California and enjoying the Isla Vista vibe, painting just grew on me again. I think it is a consequence of independence that one can choose one's activities without family, peer or intellectual pressure. The freedom of exploring Art again has strengthened my learning and passion for Scene Understanding and Vision. (Marr's book on vision is getting really interesting by now!)

End of Semester Projects : Treasure Hunt and LimaSAT


By the end of the semester I took two courses that I enjoyed very much: Intelligent Video Game Design and Digital Image Processing. Both courses had a final project into which my classmates and I put much effort and time. Here is a brief explanation of each one; I'll be adding the paper, slides and code for each project.

Treasure Hunt: Our main goal is to answer the question: how do I get to the treasure with the maximum amount of Hit Points remaining? We model this as an optimization problem. To achieve it, we use the XNA videogame framework, the A* algorithm for our Hero's pathfinding, and semi-stochastic behavior to model the Enemies.
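As a sketch of the pathfinding side (in Python rather than the C#/XNA code we actually used; the grid, costs and names below are made up for illustration): A* expands the cheapest frontier node first, where each cell's cost stands in for the Hit Points lost by entering it, and a Manhattan-distance heuristic guides the search toward the treasure.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; each cell holds the HP cost of entering it
    (the start cell itself is free). Returns (path, total_cost)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    best = {start: 0}
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path, g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + grid[nxt[0]][nxt[1]]
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy map: high numbers are dangerous cells the Hero should route around.
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
path, cost = astar(grid, (0, 0), (2, 2))
```

The Hero follows the returned path; the Enemies' semi-stochastic moves simply change the grid costs between searches.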

LimaSAT: Various representative Google Earth satellite images were used to build a database of the different socioeconomic regions of Lima, Peru. Different low-level and mid-level features were extracted to classify these images and to estimate urban and rural development.

Treasure Hunt Slides: download
Treasure Hunt paper: download
Treasure Hunt Matlab code: To be uploaded
Treasure Hunt Microsoft XNA code: download

LimaSAT Image Database : to be uploaded
LimaSAT Slides : download
LimaSAT Code : to be uploaded

Benavides de la Quintana - PatronatoUNI Research Fellowship and Award


In November I was awarded the Benavides de la Quintana - Patronato UNI Fellowship and Award, worth $5,400.00, to pursue a research internship at the Vision Lab at the University of California, Santa Barbara, under the supervision of Prof. Manjunath, for a period of 3-6 months. I am currently waiting for another research grant approval for more funds to extend my stay.

If you are interested in knowing how I managed to obtain the grant, I have a few tips to give:

1) Start doing research. Talk to Professors, look up different research labs and universities, look for contacts through Facebook, and read university research news and blogs that can give you ideas about what you want to do.
2) Be realistic. If you come from a university like mine where you do not have a $4 Million Nanotechnology lab with High-tech fancy equipment, try to work on something that doesn't require expensive hardware, and concentrate on the software. All you need is a computer!
3) Get into it. Look for poster/research competitions in your local area and/or abroad. In my case, if I hadn't attended MLSS '11 at Purdue University, where I presented my first poster, chances are I wouldn't have gotten this fellowship or a research internship.
4) "Just a little Patience": If you are an undergrad student who wants to do research, keep in mind that research at the beginning is very confusing and looks overwhelming. Most researchers know this but don't complain about it. Manage and use your time wisely; you won't be able to learn state-of-the-art algorithms in just a month. Reading the latest papers and getting familiar with the jargon and algorithms can take up to a year (just for the basics).
5) My advisor once told me a very wise thing when I was about to quit research, telling him "It's just too much information". He answered: "No one knows everything… As an undergrad you usually believe that you know everything; after your Masters, you start to realize that you don't know everything; and after your PhD, you feel very ignorant".

I hope this helped. And email me to arturodeza [at] if you have any suggestions or improvements that I might add to this guide.

Mentoring and Teaching Computer Vision


I'm writing a pdf where I'll share my experience with the challenging task of running a research lab as an undergrad, and the Computer Vision classes I offer to a small group of junior students. I'll upload it shortly!

December 23rd : Download the Guide here

Image Segmentation : MATLAB and/or OpenCV !?

25/7/11

Last month I attended MLSS 2011, the Machine Learning Summer School at Purdue University, where I had the pleasure of being one of the two undergrad students giving a poster presentation. It was actually my first poster presentation, and I was looking forward to getting feedback and new insights on my research from the other attendees (and friends), as well as anyone else curious about what I was presenting. During the poster session I received great advice (and even some corrections!) on my research, and this helped me a lot. The question that constantly came up was: why was I using OpenCV and not MATLAB?

A year ago, when I started off in Computer Vision, I decided to give OpenCV a try because I felt very confident in my C++ programming skills, and because it was open-source and free of charge, unlike MATLAB, which requires a license. I also remember picking OpenCV over MATLAB because I wanted the "deepest understanding" of Computer Vision techniques and computing performance. Over this year I did learn some concepts of memory optimization, as well as how to use pointers to handle my images efficiently. But at some point in my research, whenever I wanted to get into the state-of-the-art algorithms, most (if not all) of them were coded in MATLAB.

Here is a list of Computer Vision code webpages that use MATLAB, related to the Image Segmentation problem:

SLIC Superpixels:
Normalized Cuts:
Professor B.S. Manjunath UCSB Computer Vision Group Segmentation Algorithms:
The Berkeley Segmentation Dataset :

MATLAB is growing in popularity among many Vision researchers, and the opinion I hear most often today in the MATLAB-versus-OpenCV debate is that MATLAB is excellent for prototyping any vision algorithm one has in mind. Once the algorithm works, OpenCV and CUDA can be used to optimize run-time.

The Weighted K-means Image Segmentation code (OpenCV) I presented at MLSS 2011 with some improvements: download

Machine Learning Summer School Poster Presentation



This is the poster I will be presenting on my Image Segmentation research at the MLSS at Purdue University.

You can download it here.

Voronoi Tessellation and Auto-grouping for Image Segmentation


(Figure: the original image with its cluster centers, shown before and after auto-grouping.)

I found out that what I was working with was a variation of the Voronoi Diagram, and as clusters got close to each other, some results were inconsistent, because I consider not only distance, as in most Voronoi Diagrams, but also pixel intensity and Sobel operator responses. To make my results more consistent, I started testing auto-grouping algorithms. The one implemented above is based only on distance; the complete feature vector is used only to assign each pixel to its cluster.

As you can see, we test the algorithm on a really simple black and white photograph. It starts out with 30 randomized clusters, and as they move through space, some clusters are "swallowed up" by bigger ones, making the results less redundant. Ideally we want the algorithm to learn some other parameter, such as the minimum number of clusters needed to "segment" the image correctly. But this is actually a very ambiguous question: we as humans can choose to see as much detail as we like, ignoring or overrating different information. That is why we control the auto-grouping with a threshold distance T: cluster centers C1 and C2 that are at most T pixels apart are joined together.

When two clusters are merged, the new cluster center is calculated by averaging the x and y coordinates of the merging clusters.
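A minimal Python sketch of this threshold-based merging (function and variable names are hypothetical; my actual implementation is in OpenCV/C++): any two centers within the threshold T are replaced by their coordinate-wise average, repeating until no pair is close enough.

```python
import math

def merge_clusters(centers, T):
    """Repeatedly merge any two centers at most T pixels apart,
    replacing them with the average of their (x, y) coordinates."""
    centers = [tuple(c) for c in centers]
    merged = True
    while merged:
        merged = False
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                (x1, y1), (x2, y2) = centers[i], centers[j]
                if math.hypot(x1 - x2, y1 - y2) <= T:
                    # new center is the coordinate-wise average
                    new = ((x1 + x2) / 2, (y1 + y2) / 2)
                    centers = [c for k, c in enumerate(centers) if k not in (i, j)]
                    centers.append(new)
                    merged = True
                    break
            if merged:
                break
    return centers

centers = merge_clusters([(0, 0), (4, 0), (100, 100)], T=5)
```

Starting from 30 randomized centers, each pass of this loop is what makes the smaller clusters get "swallowed up" by nearby ones.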

Clustering Algorithm with Feature Vector support



It was a very heavy day, but after mid-term exams I decided to code a variation of the K-means clustering (segmentation, for Computer Vision) algorithm, using a feature vector that includes:

x position of Pixel
y position of Pixel
Intensity of Pixel
3x3 Kernel Horizontal Sobel Operator on Pixel
3x3 Kernel Vertical Sobel Operator on Pixel

  • I should make it clear that this is a variation of the K-means clustering algorithm: we compute the distance between each point and each initial cluster center, but we do not update the cluster centers afterwards, making it somewhat clumsy, yet efficient, as our experiments will show.
  • Image pre-processing includes histogram equalization over the grayscale image, and working on only one channel (grayscale) for the intensity and Sobel operators.

These features are combined into a weighted square-root energy potential, or score. We try to find the minimum energy to correctly match each of the W x H pixels of the image to its corresponding cluster center.
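In Python, the single-sweep assignment could be sketched like this (the image, centers and weights below are hypothetical; each center is a point in the same 5-dimensional feature space):

```python
import math

def sobel_responses(img, r, c):
    """3x3 horizontal and vertical Sobel responses at (r, c), zero-padded."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    gx = gy = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
                gx += kx[dr + 1][dc + 1] * img[rr][cc]
                gy += ky[dr + 1][dc + 1] * img[rr][cc]
    return gx, gy

def assign_pixels(img, centers, w):
    """One sweep: each pixel goes to the center that minimizes the
    weighted square-root energy over the 5-feature vector."""
    labels = {}
    for r in range(len(img)):
        for c in range(len(img[0])):
            gx, gy = sobel_responses(img, r, c)
            feat = (r, c, img[r][c], gx, gy)
            def energy(center):
                return math.sqrt(sum(wi * (fi - ci) ** 2
                                     for wi, fi, ci in zip(w, feat, center)))
            labels[(r, c)] = min(range(len(centers)),
                                 key=lambda k: energy(centers[k]))
    return labels

# Toy 3x3 image: a dark left side and a bright right column.
img = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
centers = [(1, 0, 0, 0, 0), (1, 2, 255, 0, 0)]
labels = assign_pixels(img, centers, w=(1, 1, 1, 0, 0))  # Sobel terms zero-weighted here
```

Note that the centers are never moved, which is exactly why the whole assignment needs only one image sweep.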

Here are some of the best results using different Weight parameters and 5 randomized Clustered Centers :

Without Gaussian Smoothing (apparently better results):

(Figures: results with 5 and 10 cluster centers, each under weights w1 and w2.)

With Gaussian Smoothing:

(Figures: the same 5- and 10-cluster, w1 and w2 configurations, with Gaussian smoothing.)

Note: I will upload the code once I find out whether this algorithm is better than the K-means algorithm.
Note to self: compare this algorithm with K-means to check which one is more efficient! (The one I just coded requires only one image sweep.)

Color Filter OpenCV Code


To continue my research I need to apply the algorithm per color channel as well, so I coded a quick program that separates each channel and shows a graphical output of each one. The code also comes in very handy for anyone who's starting out with OpenCV.

Results vary for different Lower and Higher Threshold values.
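The idea can be sketched in a few lines of Python (plain nested lists standing in for OpenCV images; the image and threshold values below are made up): keep a channel's value only where it falls between the lower and higher thresholds, zeroing everything else.

```python
def channel_filter(pixels, channel, lo, hi):
    """Keep only values of one RGB channel within [lo, hi]; zero elsewhere.
    `pixels` is a list of rows of (r, g, b) tuples."""
    out = []
    for row in pixels:
        out_row = []
        for px in row:
            v = px[channel]
            out_row.append(v if lo <= v <= hi else 0)
        out.append(out_row)
    return out

img = [[(200, 10, 0), (50, 120, 30)],
       [(90, 240, 15), (10, 60, 255)]]
red = channel_filter(img, 0, 60, 255)   # red channel, thresholds 60..255
```

Varying `lo` and `hi` is exactly what changes the results mentioned above.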


Download Here

Mean-shift Segmentation : Third tests and Experiments


I've decided to show a graphical output of the labeling result. In this case I'm particularly interested in selecting the "main object" of the picture. I've been running my tests on the cow image for now, but I will then move on to other, more challenging images that present clutter or a low degree of occlusion.


As you can see in this test, the program autonomously labels the image and tries to identify the "main object", or protagonist, using the mean shift algorithm as well as other techniques that I will comment on later, once the results are worth publishing. Some of these techniques include generating random coordinates for uniform labeling, and a while loop that keeps processing the image until a lower limit of permissible labels has been reached. Recall that, as I mentioned before, my processing algorithm starts out with W x H labels and gets down to 25 in the case of the cow. Each label is represented by a different grayscale color. Some labels are still fuzzy and not really acceptable; nevertheless, the shape of the cow is neatly recognized. What I am working on now is removing the background lines that confuse the cow label with the background label, which is why the cow looks like it is stretched horizontally.

Happy Programming, and I'll post the C++ code (OpenCV) when it's done!

Mean-shift Segmentation : Second tests and Experiments



Segmentation is working better, and the results are more visible. I still need to complete the image contour (the cow in this case) for the segmentation to be acceptable.

Better results could be achieved with a 3-channel analysis; in this case I just converted the RGB images to grayscale and ran the MS algorithm on that, to simplify the calculations. Unfortunately this comes at a price: as you can see, even though the cow is apparently "correctly" segmented, a fraction of the shadow below it is too. Notice also that the grass hasn't been separated from the horizon. This could be solved by reducing the number of labels for the given problem; if reduced to 3, results would surely improve. In our algorithm, however, we start with W x H labels, and since the clustering method works like an unsupervised learning ML method, it automatically adjusts the number of labels each image has, according to the fixed and non-fixed centroids of the superpixels, which end up grouping themselves.
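For intuition, here is a tiny Python sketch of mean shift in one dimension, i.e. on grayscale intensities only (flat kernel, fixed bandwidth; this illustrates the mode-seeking idea, not my actual code): each point is shifted to the mean of the data within its bandwidth until it settles on a mode, and points sharing a mode share a label.

```python
def mean_shift_modes(values, bandwidth, iters=50):
    """Shift each point to the mean of its neighbors within `bandwidth`
    until convergence; points ending near the same mode share a label."""
    points = [float(v) for v in values]
    for _ in range(iters):
        moved = []
        for p in points:
            neigh = [v for v in values if abs(v - p) <= bandwidth]
            moved.append(sum(neigh) / len(neigh))
        if all(abs(a - b) < 1e-6 for a, b in zip(points, moved)):
            break
        points = moved
    # group converged points into labels
    labels, modes = [], []
    for p in points:
        for i, m in enumerate(modes):
            if abs(p - m) <= bandwidth:
                labels.append(i)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return labels, modes

vals = [10, 12, 11, 200, 205, 198]   # two intensity groups
labels, modes = mean_shift_modes(vals, bandwidth=20)
```

The bandwidth plays the role of the intensity bandwidth mentioned above: a larger one merges more intensities into a single label.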

Planning on adding MRF's or CRF's for better performance on more complex images.

Mean-shift Segmentation : First tests and Experiments



After about 2 weeks of programming and learning some AI and statistical clustering (segmentation, for Computer Vision), and reinforcing my OpenCV knowledge, I managed to finish a small 500-line segmentation algorithm using basic functions. I basically started from the IplImage class and manipulated all the data with additional vectors, to store information such as labels and pixel intensity values. I'm posting, excited, because the mean-shift segmentation algorithm seems to work; nevertheless, I think I need to work on some parameters in my code (like fixing an intensity bandwidth) to get applicable results. Anyway, here are some pictures of my premature mean-shift segmentation results:


There is still a lot to work on! I'll be posting my code once it is finished, just as I promised when I released my 2D Cross Correlation Algorithm and my RTAI Installation Manual.

OpenCV 2D Cross Correlation Code Release & Research Progress


I'm focusing on Professor Daphne Koller's (Stanford) and Pawan Kumar's (Stanford) paper on scene understanding, which efficiently selects regions of an image to optimize the segmentation process. After some time, I think I'll concentrate my work (taking advantage of the summer) on this paper.

I'm now working on understanding the mean shift algorithm applied to image segmentation. Linear programming will also have to go on my agenda, as well as learning some Machine Learning techniques using Bayes and MRF's. Fortunately I bought a Machine Learning book with Python code to help me out, and Andrew Ng's virtual classes are also helping. I should also thank Professor PK Biswas from IIT for his virtual lectures on Digital Image Processing.

I've also checked out Sebastian Thrun's work on Probabilistic Robotics, and the Machine Learning techniques he used to drive Stanley to the finish line! Autonomous Navigation is looking good with the help of both Machine Learning and Image Segmentation.

Daphne Koller's & Pawan Kumar's paper: Efficiently Selecting Regions for Scene Understanding CVPR '10

My first template matching algorithm implementation (the 2D cross-correlation program) turned out to be already implemented in the OpenCV library by the cvMatchTemplate and cvNormalize functions. So much for those 400 lines of code!
Those interested in my 2D Cross Correlation OpenCV program can download it by clicking on the Files link at the bottom of the site, or on this link:

New Alienware m11x Netbook with Ubuntu 10.04 and Windows 7 Dual boot . Hooray!

Happy Programming!

RTAI Manual & Speed Tests complete!


I finished putting all my RTAI information together. You can now find it at the Files link at the bottom of this page and download it as a pdf file.


I've added some RTAI speed tests, a common-bugs section, and other important bibliographic material for those interested. I'll post my Computer Vision progress later on.

You can also download it here : RTAI Installation Guide and Speed Tests

2D Correlation Program for Pattern Recognition


For those unfamiliar with Pattern Recognition, the name pretty much explains itself: the aim of this research field is to recognize patterns through different algorithms coded into the computer. These patterns may be audio, visual, behavioral, etc. Below, a correlation algorithm has been implemented in C++ to recognize a Template within an Image (hence 2D).

Wait a sec, Correlation? What's that? I barely finished high-school..
Correlation is a mathematical process where you compare 2 different functions (A & B), and obtain as a result another function (C) that tells you how much they look alike.

I still don't get it…
Ok, let's say we are going to buy a car:
(This would be our Template)


And we enter the car store and find ourselves immersed in a wide variety of options: different brands, colors, sizes, 4x4s, sedans, motor specifications, etc…
(These would be the different points on our Image)

(Images: three candidate cars in grey, red, and purple.)

Now of course you have your ideal option (your Template), and I have no knowledge of what your preference is. So I, your buddy and pal, will start to give you my personal opinion on which car you should buy. You, of course, disagree on most options, agree partially on some, and agree almost completely on a specific car or two.
Here you have correlated your ideal car with all the available car options. You agreed on high-correlation values, partially agreed on low-correlation values, and showed disgust towards negative-correlation values!
NCC coefficients range from -1 to 1: highly correlated values are close to 1, negatively correlated values are close to -1, and weakly correlated values are close to 0.

Fictional NCC values would be:

  • Yellow & Grey Car : 0.3
  • Yellow & Red Car: 0.9
  • Yellow & Purple Car: -0.2

The same thing goes for image analysis: the computer correlates each region of the image with the template to see which regions match the most, in this case by pixel intensity.


Here is a Computer Program example:
(compiled with g++ 4.3, using OpenCV)

2D Normalized Cross Correlation (NCC) Formula used:
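(The image with the formula did not survive the page; presumably it was the standard 2D NCC definition, with $T$ the template, $I$ the image, and the sums running over the template support:)

$$
\gamma(u,v)=\frac{\sum_{x,y}\bigl[I(x,y)-\bar I_{u,v}\bigr]\bigl[T(x-u,\,y-v)-\bar T\bigr]}{\sqrt{\sum_{x,y}\bigl[I(x,y)-\bar I_{u,v}\bigr]^{2}\;\sum_{x,y}\bigl[T(x-u,\,y-v)-\bar T\bigr]^{2}}}
$$

where $\bar T$ is the template mean, $\bar I_{u,v}$ is the mean of the image patch under the template at offset $(u,v)$, and $\gamma$ ranges from -1 to 1.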


So let's say we want to detect the letter "o" in "Google", along with calculating its 2D position:
The "Template" is the image we want to find inside the landscape.
The "Image" is the landscape the computer will have as reference.


The "Whiter" the zones, the higher the Correlation value is. Does it make any sense to you?
NCC highest value: 0.97903;
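A compact Python sketch of the same computation (plain nested lists standing in for images; the toy image and template below are hypothetical): slide the template over the image, compute the NCC at each offset, and report the offset with the highest score as the match.

```python
def ncc(image, template, u, v):
    """Normalized cross-correlation of `template` with the image patch
    whose top-left corner is (u, v). Returns a value in [-1, 1]."""
    th, tw = len(template), len(template[0])
    patch = [row[v:v + tw] for row in image[u:u + th]]
    flat_p = [x for row in patch for x in row]
    flat_t = [x for row in template for x in row]
    mp = sum(flat_p) / len(flat_p)            # patch mean
    mt = sum(flat_t) / len(flat_t)            # template mean
    num = sum((p - mp) * (t - mt) for p, t in zip(flat_p, flat_t))
    den = (sum((p - mp) ** 2 for p in flat_p)
           * sum((t - mt) ** 2 for t in flat_t)) ** 0.5
    return num / den if den else 0.0

image = [[0, 0, 0, 0],
         [0, 9, 1, 0],
         [0, 1, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 1],
            [1, 9]]
scores = {(u, v): ncc(image, template, u, v)
          for u in range(3) for v in range(3)}
best = max(scores, key=scores.get)
```

The "whiter" zones of the correlation map correspond exactly to the offsets with scores near 1.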


Although this technique is very simple yet fairly powerful, there are more sophisticated techniques for Pattern Recognition, like scaling and rotating the 2D template, using a Canny (edge-detection) transformation, Fourier descriptors and much more.


Future Work: Add Scalable and Rotational Templates, as well as CUDA GPU acceleration.

Ok nice job, but is Pattern Recognition really THAT important?
Breakthroughs in the multi-million-dollar biotechnology industry, like MRI imaging and connectome 3D reconstruction, are using pattern recognition and other techniques to accelerate the scientific process. PR (Pattern Recognition) is also in our everyday smartphones and augmented reality devices, not to mention the coming leap in Computer Vision security systems. PR, along with machine intelligence, will give robots the methods needed to interpret and classify information. Can you believe that one of today's most sophisticated robots is using Computer Vision and CUDA acceleration to fold towels? (Click here to watch the video.) Last but not least, big companies like Google and Facebook are using PR techniques to classify their image searches and help you tag your friends in your vacation photos!

TED Video on connectome Reconstruction:
TED Video on Augmented Reality Systems " The Sixth Sense"

Got Milk ? Need code? email me at mdezaf [at] uni . pe

Tutorial on installing RTAI-3.8 on Linux Ubuntu 9.10 w/ kernel


1) Download RTAI-3.8

Download & uncompress RTAI-3.8 in a new folder inside your Documents folder called RTAI_vf.

2) Download Kernel
Open up a Terminal & Type:

$ cd /tmp
$ wget

Now in Super user mode:

$ sudo su
#tar -xjvf linux- -C /usr/src

Kernel is now uncompressed in /usr/src !

Important Observation :


Note: How do you know if you are using x86 Processor Architecture ?

Open up a terminal and type:

$ uname -a

You should get something like this:

Linux username-laptop 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686 GNU/Linux

Check that before "GNU/Linux" you get "i686". If this is the case, we are going the right way!

3) Apply the RTAI patch on the Linux Source

Open a Terminal & type:

$ cd /usr/src/linux-
$ patch -p1 </home/username/Documents/RTAI_vf/rtai-3.8/base/arch/x86/patches/hal-linux-
OBS: replace username with YOUR username.

4) Configure the Patched Linux Kernel:

(You need the QT Development Package)

Log in as root:
$ sudo su
#make xconfig

Remember to check/select or uncheck/deselect these boxes as appropriate:

* Support for large (2TB+) block devices & files.
* Symmetric Multi-processing support
* Local APIC support on uniprocessors
* Enable loadable module support.
* Choose your processor type from the radio buttons (I happen to have a Core 2).

Save configuration.

Most installation errors come from mis-selecting one of the kernel configuration options. Check any errors against the web or other installation manuals.

5) Compile the Patched Kernel


#make modules

#make modules_install

6) Install Kernel

#make install

You will find these 3 files created for your Grub:

7) Create an initrd image file for the booting process.


8) Test Reboot


When you reboot your computer, wait about 20 seconds just after GRUB initializes. I think this is necessary so that the kernel doesn't bug out; I can't completely understand why this happens, but it's a recommendation. If everything goes OK, select your new kernel and wait for the system to boot up.

If the system initializes without any errors, reboot into your older kernel to continue with the RTAI configuration and installation.

9) Configure RTAI

Open a terminal in your predetermined kernel and type:

Replace username with YOUR username.
$ cd /home/username/Documents/RTAI_vf
$ mkdir rtai-build
$ cd rtai-build
$ make -f /home/username/Documents/RTAI_vf/rtai-3.8/makefile xconfig

Go to :
General -> Linux Source Tree, and set the address (double-click on the current address) to:

10) Build RTAI

Super user mode:
$ sudo su

11) Install RTAI

#make install

12) You're done! Time to test RTAI!


Reboot your system and enter your new kernel (wait for the 20 seconds as I told you).

Open up a terminal & type:

$ sudo su
#cd /usr/realtime/testsuite/kern/latency

Questions? Errors? Thank-you Notes?
Don't hesitate on contacting me! =)