My Techniques
During the age of film photography, a photographer, for the most part, had to capture the image as it would appear. Of course there were tricks a professional could do with a darkroom and enlarger, but the options for getting a good photo lay primarily in planning for light, composition, exposure, depth of field, and focus before actually taking the picture. While those upstream basics still help, digital photography has opened up many options for the photographer, both while the picture is being taken and in post processing.
Some of you will see one of my pictures and wonder how it was done. In this section, I will cover the major techniques and software I use in my photography. I hope these techniques give you a jumpstart in expanding your own photographic capabilities.
I will cover my basic approach, HDR (High Dynamic Range), stereophotography, fractal creation, digital painting, and panoramic stitching.
Basic Approach
Even with all the wonderful options available with digital photography, you will have a higher ratio of great pictures to rejects if you master the basics of focus, exposure, composition, and aperture.
I like simple, so most of my pictures are taken with a basic camera (Canon T3i) and lens (the standard Canon 18-55mm kit lens). I seldom use flash even in lower light situations (my dogs and wedding photos are an exception), although mastering flash and lighting opens up many more options to capture a scene. I have had much more expensive cameras and lenses, but I like the light weight and full features of the T3i. As technology advances, point and shoot cameras offer the ability to capture quality snapshots or better. Some even have useful features not yet available in more expensive cameras, so my next hardware purchase will probably be a point & shoot rather than chasing high end/high cost. I also have a 28-135mm kit lens that I sometimes use to get closer to the action. Both kit lenses are image stabilized. Longer focal lengths have their place, but they add weight and the need for a tripod.
I usually shoot JPEGs, but for subjects with a mix of bright and low light I occasionally shoot raw as well. Raw format holds all the unprocessed information captured by the sensor before the camera automatically processes it down. It gives a higher chance of getting a good image, but it creates much bigger files and must be processed with special software like Lightroom. The T3i and most new DSLRs produce high resolution images. The images on my web site have been drastically reduced for internet efficiency, yet I think you will agree they hold up well. Shooting in high res allows more cropping options and the ability to make large prints without pixelation.
I usually load my files from the camera onto my computer and view them in Picasa. I use Picasa to preview my photos, search for files, do simple post processing, and print hardcopies. The user interface works for me, and it's free software. I do more extensive post processing in Adobe Elements (not free). When I shoot raw, I use Lightroom, although its file management system causes me to avoid raw processing unless absolutely necessary. I also post my best photos on Flickr, a popular sharing site with some helpful features for organizing, sharing, and printing photos. Other software tools are covered below under the specific technique topics.
HDR (High Dynamic Range)
A normal image taken by a digital camera does not have nearly the capability to capture the range of simultaneous tones that can be seen by the human eye. As a result, digital photos are processing compromises to get the "best" image under this limitation. This is especially true in settings with a lot of deep shadows, a lot of highlights, and a lot of midtones. The settings for processing the image can emphasize the details in the dark areas and blow out the highlights, or vice versa.
HDR is a technique to expand the apparent range of the captured image. In most cases, it involves taking multiple shots of the same subject at different exposures. The underexposed shot captures more highlight information, while the overexposed shot captures more information in the dark areas. Software is then used to merge the multiple images into one by combining the best parts of each. (RAW files are the exception to needing multiple exposures, as the multiple exposures can be created in processing from one RAW image.) Because of the range of exposure times, the camera needs to be as still as possible so the parts can be digitally matched.
The software not only combines the images to get the most detail across the range from dark to light; it also analyzes the result and "tone maps" it, picking the most used shades to fit the combined range within the limitations of a single photo.
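If you like to tinker, the same merge-and-tone-map idea can be sketched in a few lines of Python with OpenCV. This is only a rough illustration of the technique, not what Dynamic Photo HDR or Photomatix do internally; the file names and shutter speeds are placeholders for your own bracketed shots.

```python
import cv2
import numpy as np

# Three bracketed shots at -2, 0, and +2 stops (placeholder file names)
files = ["bracket_minus2.jpg", "bracket_0.jpg", "bracket_plus2.jpg"]
images = [cv2.imread(f) for f in files]

# Shutter speeds for each bracket, in seconds (placeholders)
times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)

# Align the handheld shots, much like the align function mentioned below
cv2.createAlignMTB().process(images, images)

# Merge the exposures into one floating-point radiance map
hdr = cv2.createMergeDebevec().process(images, times)

# Tone map the wide-range result back into a displayable 8-bit image
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```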
I use the automatic multi-shot and exposure bracketing on the Canon T3i to get 3 shots at -2, 0, and +2 stops. Some cameras now have a built-in HDR function, including automatic post processing in the camera itself. Some photographers use many more than 3 shots, but I have found three is a good number for most situations. Because at least three shots are involved, it is very important to minimize camera shake or vibration, either with a tripod or by bracing the camera against a solid, stable object.
HDR images tend to have a look of their own based on the extremes of exposure that go into the combination and tone mapping.
There are many software packages available for HDR. I use Dynamic Photo HDR. It is cheap, straightforward to use, and has a powerful align function for those of us who shun carrying around a tripod. I have also seen great photos processed with Photomatix. There are at least five other packages that process HDR, so take your pick.
There are many sources for HDR information, including this guide, which offers more great examples.
This was my first attempt at HDR. I used both a trial version of Photomatix and a purchased version of Dynamic Photo HDR (this shot), getting great results from both. I braced the camera on the wall, shimming it with a baseball cap to keep it level, and took my customary 3 shots at 2-stop increments. This is an example of the value of HDR. The sun had set behind me, leaving a faint light in the sky. Parts of the memorial were brightly spotlit, while other sections were totally unlit. By combining the three exposures, I was able to bring out the faint color of the sky and the unlit parts of the memorial along with the brightly lit areas in the spotlights. Because of the extremes of light and dark, night shots are good candidates for applying HDR effectively.
The neon lights of this diner caught my eye, but I knew the camera could not capture the bright lights along with the more subtle shades on the side of the building. HDR solved the problem. I also needed a place to stabilize the camera, and a nearby car roof did the trick, with the added benefit of a neat reflection.
Another great night image, where sky, bright lights, darker walls, and reflections all balance. To steady the camera I had to lie on a steep slope in wet grass, but the temporary discomfort was well worth the end result.
Despite all the night images shown above, HDR works well in daylight. Here the combination of bright sunlight and dark shadows was ideally suited to HDR.
Couldn't resist one more HDR night image. A bridge rail was used to stabilize the camera, and HDR integrated the bright lights, reflections, and dark areas of the Vatican. Note the detail captured even under difficult lighting. Many of my favorite works involve nighttime and HDR.
HDR programs have many controls for combining and adjusting images. These angels in St. Peter's were harshly lit on one side, and the gold bowl overwhelmed the image. HDR allowed a balanced final image, with a boost of contrast to highlight the antique nature of the holy water font.
Stereophotography (3D)
As a child, long before most reading this were born, I was fascinated by the 3D world in my View Master. I could not figure out how this realistic depth could spring out of those 2D discs when they were popped into the viewer. But stereographics was not some new technology; people my grandparents' age had marveled at the same illusions decades before.
There are many depth cues that humans merge into a perception of 3D. Stereo is one of them. All the stereo viewing techniques that I can think of involve some form of presenting one image to one eye and a view from a slight offset to the other. Earlier viewers, including the View Masters, grandma's classic viewer, and cheap plastic viewers, used physical separation of two images. Projection systems typically use polarizing filters and glasses with matching polarization to get the proper image to each eye. Modern projection systems for video rely on projecting alternate frames at high frequency, viewed through glasses with shutters that alternate on and off in sync with the projector. Home 3D flat panels were a hot item in 2012/13 but seem to have fallen off the face of the earth in 2014. They typically run at twice the frequency of regular flat panels to provide each eye with the same number of frames, and they also use synced shuttered glasses for viewing.
There are other methods for stereo viewing. One is the anaglyph, which uses color filters rather than polarizing filters to separate the frames for each eye. This limits the ability to use the full color spectrum, which is one reason I don't produce anaglyph images.
While all my images can be displayed with any of these systems, they all require hardware in the loop. I want people to be able to see stereo without glasses or other hardware, whether on a web page or a 2D print. In fact, there is a way to do this, and I have used it on this web site so most of you can see my stereo images in 3D. It's fittingly called "cross eyed stereo." The method is simple. The right eye image is positioned on the left, and the left eye image is positioned on the right. You then slowly cross your eyes until they converge in a third image, fully in 3D. It's not hard, but it does feel a little unnatural at first. It does, however, give you the ability to view amazing 3D images without the burden of lugging around special equipment.
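If you want to assemble cross eyed pairs yourself, the layout is easy to script. Here is a minimal sketch in Python with Pillow; note the deliberate swap, with the right eye image pasted on the left. The file names are placeholders, and this is not what StereoPhotoMaker does internally.

```python
from PIL import Image

# Placeholder file names for the two views
left_eye = Image.open("left_eye.jpg")
right_eye = Image.open("right_eye.jpg")

w, h = left_eye.size
gap = 20  # small black gutter between the two frames

pair = Image.new("RGB", (w * 2 + gap, h), "black")
pair.paste(right_eye, (0, 0))       # right-eye view goes on the LEFT
pair.paste(left_eye, (w + gap, 0))  # left-eye view goes on the RIGHT
pair.save("crosseyed_pair.jpg")
```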
So that's how you can view stereo, but how do you create it? First, there are special stereo cameras. There are also lens attachments that provide a split image. For stereo photography, I like simple, so I don't like to carry around extra equipment; I just use one camera. If you remember that you are trying to mimic the separation provided by human eyes, you will get the basic idea. You need separation of about the offset of the eyes, and you need to angle in just a little, like your eyes do to focus on a point in space. For stereo, I usually take one image with my weight over one foot and then shift to the other foot for the second shot. I also pick a point in the image and center one of the focus marks over it. When I shift to the other foot, I make sure the focus point marker stays on the same place in the screen. Some people actually mount two cameras side by side. This is how 3D movies are made today. It adds a lot of complexity to the scene and requires much advance planning, but judging by the quality of the recent rash of 3D films, Hollywood has mastered 3D stereo. For computer graphics, the task is much easier because you simply create a second virtual camera. It still requires more planning, especially if it must sync with live action.
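For the computer graphics case, the second virtual camera amounts to offsetting each eye half the interocular distance along the camera's right vector and aiming both at the same convergence point, which gives the slight toe-in described above. Here is a hedged sketch of that math in plain Python with NumPy, not tied to any particular rendering package; the positions are placeholder values.

```python
import numpy as np

def stereo_cameras(eye, target, up, interocular=0.065):
    """Return (position, target) for left and right virtual cameras."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    half = interocular / 2.0
    # Both eyes look at the same target point, producing the toe-in
    return (eye - right * half, target), (eye + right * half, target)

left_cam, right_cam = stereo_cameras(
    eye=np.array([0.0, 1.7, 5.0]),     # camera position (placeholder)
    target=np.array([0.0, 1.0, 0.0]),  # convergence point (placeholder)
    up=np.array([0.0, 1.0, 0.0]),
)
```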
Once the two images are captured, I put them through a program called StereoPhotoMaker. It has a number of useful features, including the ability to read and write multiple formats and to align the two images. Next I usually go to another program called Stereomasken, which allows interesting 3D cropping of the two images simultaneously. Both of these highly functional programs are free.
I think my earliest venture into stereo was a photo I took of an SR-71 with a film camera. The nose boom almost poked the viewer in the eye. The images were simply mounted in slides and dropped into a cheap plastic holder. Fortunately, my career, my interests, and my hobbies all complemented each other. As I explored the new world of computer graphics, my groundbreaking 2D images left me dissatisfied. When the technology advanced to modeling 3D in the computer, I was again quickly dissatisfied, this time with flat images of 3D virtual objects that I could not see and share in 3D. The simple dual image viewers I used at home bridged the gap long enough for me to figure out that I could stack two slide projectors, each fitted with a polarizing filter borrowed from the photo department, oriented 90 degrees to each other. With some cheap cross polarized glasses, I had a way to project images that jumped off the screen before a crowd of people. The results were a big hit.
We were working with NASA on analyzing the flow in the space shuttle main engine, and I developed a three dimensional visualizer for the massive numerical results of the extensive computational fluid dynamics (CFD) codes. Because CFD was so computationally demanding, we would travel to NASA Ames in the Bay Area to use their supercomputers at night. Someone at Ames saw some of the images I made and asked me to do a presentation for their team. Because I didn't know where the presentation would be and I wanted to do it in stereo, I lugged along my own projectors, polarizers, and an old 4x4 foot screen with the metal coating curling at the edge. It turned out they had a beautiful new auditorium with a giant screen, and the room was mostly filled for the briefing. Because I didn't know the operation of their beautiful room, I used my own kludged equipment. After the briefing, the director for the unit came up and announced to the room how impressed he was with the 3D presentation. He wondered why their own attempts at stereo had failed. It turns out their big, expensive glass beaded screen was depolarizing the light, killing the stereographic effect. Luckily I had brought along my crummy metal coated antique screen; otherwise my big brief to NASA would have failed.
I also had an opportunity to brief another NASA independent advisory group, using stereo, on our design for the National Aerospace Plane propulsion system. My company was participating as an unfunded competitor, meaning we were competing on our own nickel against fully government funded competitors, so it was important to do extremely well. Most companies on most competitions don't want to risk presenting material in innovative new ways lest something go wrong. All this came back to me as the group broke for lunch after my presentation and the head guy from Johns Hopkins held back and said, "I have a bone to pick with you!" After I got my heart out of my throat, I asked what that was. He said, "You always kept that engine inlet just beyond the tips of my fingers." And he was right: I had an imaginary object floating in the middle of the room just beyond everyone's touch.
My use of stereo continued in both my work and my hobby, culminating in the design and fielding of a state of the art development center for the famed Skunk Works, which featured a nearly 30 foot wide high resolution stereo screen. Digital mockups of entire vehicles could be made to appear to float in the middle of the room, and more than once people close to the screen fell out of their chairs ducking a virtual object popping out of the screen right at them.
Books and courses abound on stereographics and the subject is far too broad and complex to even begin to cover here, but I hope I have given you some feel for how to proceed in this very special subset of photography and graphics.
An example of a 3D stereo image formed from 2 separate photos taken with a single non-stereo camera.
This is an example of cross eyed stereo from before I started using StereoMasken. The cutouts were done in Elements. Having the central object extend beyond the background enhances the stereo effect.
This is an example of computer generated cross eyed stereo created in Mandelbulb 3D.
The beauty of flowers is enhanced by adding stereoscopic depth.
StereoMasken is essential in creating the complex cropping typical of fancy stereo images and in avoiding boundary violations.
This image of the Ponte Vecchio in Florence, Italy required the complex cropping available with StereoMasken.
Depth adds another dimension (literally) to quality photos.
Stereo expands this photo into the screen to add fuller appreciation of the beauty of this Rose Parade Float.
This Folded Wings Memorial in Burbank, California shows the power of stereo as the ground and shuttle come forward in front of the vertical walls of the memorial.
The complex terrain in this image required complex cropping with StereoMasken to avoid boundary violations, which cause headaches as the brain tries to process conflicting information.
This image represents another way of creating stereo. Here I used Elements to take one flat image with transparency, duplicate it, and shift it over more than a full frame. Next I took another duplicate on a new layer, scaled it, and shifted it over a little. Then on one more layer I placed another scaled duplicate and offset it horizontally. The placement of the two top duplicates determines how deep the stereo effect will look. Finally, I placed the whole stack on a black background. I have also created a stereo image in Elements using multiple flat images with transparency. First I sketched a 3D cube, and next to it created another similar cube as it would appear if viewed from a slightly different angle. The transparent images were then pasted into each cube with corners attached at corresponding positions on each of the cubes. The build-up of images continued until a pleasing 3D image was obtained. Finally, the alignment cubes were hidden to produce the final image.
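The layer-shift trick lends itself to scripting too. Here is a rough Pillow approximation of the Elements steps just described; the file name, offsets, and scales are all placeholders you would tune by eye.

```python
from PIL import Image

flat = Image.open("cutout.png").convert("RGBA")  # flat image with transparency
w, h = flat.size

def make_eye(layers):
    """Composite scaled copies of the cutout onto black; each copy's
    horizontal offset sets its apparent depth."""
    canvas = Image.new("RGB", (w * 2, h * 2), "black")
    for scale, (dx, dy) in layers:
        copy = flat.resize((int(w * scale), int(h * scale)))
        canvas.paste(copy, (dx, dy), copy)  # alpha channel as the mask
    return canvas

# Shift the copies a few pixels farther in one eye than the other;
# that horizontal difference controls how deep each layer appears.
left = make_eye([(1.0, (40, 40)), (0.7, (300, 200)), (0.5, (600, 350))])
right = make_eye([(1.0, (52, 40)), (0.7, (306, 200)), (0.5, (602, 350))])
left.save("stack_left.jpg")
right.save("stack_right.jpg")
```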
Fractal Creation
Fractals are simply images created as visualizations of mathematical repetitions.
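To make that concrete, here is a minimal sketch of the most famous example, the Mandelbrot set, in Python with NumPy and matplotlib. Each pixel repeats the formula z = z² + c and is colored by how quickly the repetition escapes. This is just the textbook fractal; the programs discussed below generate far fancier variations.

```python
import numpy as np
import matplotlib.pyplot as plt

width, height, max_iter = 800, 600, 100
re = np.linspace(-2.5, 1.0, width)
im = np.linspace(-1.25, 1.25, height)
c = re[np.newaxis, :] + 1j * im[:, np.newaxis]  # one complex number per pixel

z = np.zeros_like(c)
escape = np.full(c.shape, max_iter)
for i in range(max_iter):
    z = z * z + c                      # the mathematical repetition
    escaped = np.abs(z) > 2
    escape[escaped & (escape == max_iter)] = i
    z[escaped] = 2                     # clamp to avoid numeric overflow

plt.imshow(escape, cmap="twilight", extent=(-2.5, 1.0, -1.25, 1.25))
plt.axis("off")
plt.savefig("mandelbrot.png", bbox_inches="tight")
```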
I saw a coffee-table book many years ago filled with full color images of fractals. (Unfortunately, I can't remember the name or author 30 years later.) It wasn't just stunningly beautiful pictures but the visualization of complex mathematics. I was hooked, and further enthralled by the fact that no matter how far you zoom in on the image, the detail remains, reflecting variations on the original image.
There are many types of fractals, and a quick Google search for fractal images will give you an immediate appreciation of their variations and beauty. My original fractals took hours to generate, but with the great improvements in readily available computer power, there are now online interactive fractal viewers. There are also many programs online that allow even the mathematically averse to create fractal art. I am currently using two fractal generation programs, Mandelbulb 3D and Apophysis. Both of these programs, amazingly enough, are free; hats off to the programmers. I also use a third program that is a cross between fractals and digital art. It is called Fractalius and is a plug-in for Photoshop or Elements. Mandelbulb 3D allows the direct creation of stereo images, and there is now a published workaround to create stereo pairs from Apophysis, though it is mainly oriented to 2D fractals. Mandelbulb 3D takes some experimentation to get the hang of, while the primarily 2D Apophysis allows the creation of interesting fractals immediately. There are also several good online tutorials for both, which are a big help. Fractalius, on the other hand, is much harder to understand.
This is one of my first attempts in Apophysis. Not to brag, but I would hang it on my wall. I think this shows how the setup and user interface of Apophysis make quick, usable results possible. If you look closely, you can see the image is made up of repeating squares and rectangular spirals. Apophysis provides a list of randomly generated fractals you can pick from, or you can create your own. From there you adjust the colors and the basic mathematical variables to create your own custom image. You then select the resolution to render the final image.
This is another example of an Apophysis generated fractal framed in Elements.
This is another Apophysis example which shows a much different repetitive pattern.
This is yet another Apophysis variation.
A final image generated with Apophysis.
One of my first Mandelbulb 3D images, with stereo created at the push of a button. The stereo really allows you to see the rich details in the complex object.
Mandelbulb 3D permits the creation of some amazingly complex environments. I create most of my Mandelbulb 3D images in stereo because it is easy, but also because it is the only way to fully appreciate the detail and depth of the mathematical virtual worlds.
Part of the fun in using Mandelbulb 3D is searching for hidden gems like this where the mathematics create unexpected results.
Another example of a Mandelbulb 3D generated image, with an interesting strand diverging from the more repetitive structures.
While Mandelbulb 3D and Apophysis use mathematics to create images, Fractalius uses mathematics hidden behind the scenes to detect and emphasize patterns in existing photos. In this example, Fractalius has highlighted edges of this rose to make a very artsy, ethereal image.
In this example, more aggressive settings were used in Fractalius creating a much more abstract image. The original photo is simply a close up of a Christmas tree and its ornaments.
This Christmas window in Rothenburg, Germany was an ideal subject for Fractalius.
Fractalius works well on abstracting to the real essence of flowers.
Fractalius converted this close up of Marine drummers into an artistic abstract.
I snapped this photo inside Pike Place Market in Seattle. Unfortunately, in the lower light, the image turned out just slightly out of focus. With Fractalius, I was able to rescue the photo by turning it into an abstract, painting-like image.
Here is a comparison of applying Fractalius. The image on the left is the original. Not bad staring down a cheetah, but the Fractalius image on the right has a really dramatic pop.
Digital Painting
Digital painting uses computer programs to manipulate photos to look like paintings. Certainly some of the Fractalius images above could fit in that category. I also use a program called Dynamic Auto Painter. Sometimes I use some of the features in Elements to get a painterly feel. If you are really artistically talented, a drawing tablet like those from Wacom allows digital painting without the mess and drying time of paints. So far I haven't made that leap.
These images are all photos processed through Dynamic Auto Painter. DAP has multiple options for mimicking the styles of famous artists, mimicking actual paintings, and controlling brush sizes, areas of concentration, edge types, and paper types.
Panoramic Stitching
Panoramic stitching creates wide field of view images from multiple photos. It can include vertically stacked images as well as horizontal ones. I usually use two, three, or four photos, but have used many more at times. Even my old version of Elements has a powerful stitcher that not only matches the geometry of the separate images but adjusts the color match as well. I take enough overlapping shots to cover the area of interest. Under the NEW FILE tab in Elements there is an option for panoramic images. The button opens a select box to pick the multiple images; then click and watch the magic happen.
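If you prefer to script it, OpenCV ships a high-level stitcher that does essentially the same geometry and color matching. A rough sketch follows; the file names are placeholders, and this is OpenCV's stitcher, not the one inside Elements.

```python
import cv2

# Overlapping shots taken left to right (placeholder file names)
files = ["pano_1.jpg", "pano_2.jpg", "pano_3.jpg", "pano_4.jpg"]
images = [cv2.imread(f) for f in files]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)  # matches geometry and blends color

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("Stitching failed with status code", status)
```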
Note that these images with their width and extremely high resolution are ideally viewed on a large, high res display, not on a web site.
It is interesting that some new cameras have a push button auto stitching panoramic feature built directly in the camera.
This image from Arches National Park is made of 5 horizontally stitched high res photos. It would be nearly impossible to capture this wide, undistorted view without stitching. With the resolution of the final image almost 5 times that of a normal high res image, this type of image is ideal for a thirty foot wide screen display.
Air Force One at the Reagan Presidential Library is a unique attraction, but at more than 128 feet long, and with the glass wall in front, it would be impossible to capture without stitching or using a wide angle lens with its associated distortion.
This image shows the need to make sure you capture the entire area, even if it's just empty sky. I managed to get a wide view of Rome including the Vatican on the left, the Pantheon, and the Quadriga on top of the Vittorio Emanuele II Monument, but missed part of the complex gradient in the sky. Ideally the shot would have been balanced by having both statues framing the image, but the sun was directly on the left. This is a typical problem with panoramic shots: the lighting conditions vary widely across these wide compilations. This image is made up of 13 individual high res photos.
Stitching allowed me to capture the full humor of this Rose Parade float.
This image of the observation room and control center at JPL is another impossible shot without stitching!
This image of the flying wing at the Planes of Fame in Chino, California presented a real problem that stitching solved. The wingspan is 60 feet, yet the hangar is so small that there is only about four feet of room in front of the plane, not nearly enough to back up for a good tip to tip shot.
Too many good things to take in to capture with one shot. With stitching, one image captures the Coliseum, the Forum, Capitoline Hill, Palatine Hill, and the Circus Maximus.
This is a stitched photo of the original Supreme Court Chambers in the US Capitol Building. It consists of 27 individual photos. Even with so many shots, a complete image would need several more to capture the full chamber.
This image of Getty Center is an example of the panoramic capability built into many current cameras and phones. No additional stitching is necessary.