Sunday, March 13, 2016
Working with PhotoScan in the cloud
Back in 2012 when I first started flying drones to make high-resolution photomaps (e.g., strapping a first-generation GoPro to the bottom of a balsa-wood DIY drone and hoping for the best), there were few options for processing the photos.
Basically, if you didn't have access to $2,000 software, you only had Microsoft Image Composite Editor (ICE) to stitch together the photos into mosaics. Fortunately, much has changed since then.
In a window of just two years, a number of software solutions became available. VisualSFM brought free, open-source photogrammetry to tech-savvy hobbyists and researchers. Autodesk's 123D Catch could be used with drone imagery in a pinch. Pix4D, founded in 2011, later gained a huge share of the professional UAS market. I won't get into all the options, but there's a fairly comprehensive comparison table on Wikipedia that you might wish to look at.
The solution I use most often today is Agisoft PhotoScan. The feature set of the standard version is somewhat limited compared to solutions designed specifically for UAS use, but it's also easy to use, the software license is comparatively cheap, and it runs on ordinary desktop machines.
Many photogrammetry services run in the cloud (123D Catch, Pix4D, DroneMapper), which has its benefits. You don't have to upgrade your machine to run complex models. You don't have to tie up a computer for hours while it processes 500-1,000 photos. You can start a job in another country, send your images to the cloud instance, and by the time you arrive back home, the job can be done.
But processing in the cloud can mean paying fees by the month or by the job. If you like paying a one-time fee for a license, the cloud may not be the most attractive solution.
Thankfully, PhotoScan can be run in the cloud. While it does mean incurring hourly fees for computer time and cloud storage, it can also help in a pinch when you're working on an especially large project.
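One way to do this is to launch a GPU-backed EC2 instance, install PhotoScan on it, and drive the job headlessly through the Python scripting interface. Here's a minimal sketch of what that looks like, assuming PhotoScan Professional (the standard edition doesn't include scripting) and the 1.x API; the paths are placeholders, and method names are worth checking against your version's reference.

```python
# Minimal headless PhotoScan job, assuming PhotoScan Professional's
# Python API (1.x). On a Linux instance, a script like this can be
# run with: photoscan.sh -r process.py
import glob
import PhotoScan  # module bundled with PhotoScan Professional

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Load the photos uploaded to the cloud instance (placeholder path)
chunk.addPhotos(glob.glob("/data/project/photos/*.JPG"))

# Match and align the photos
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()

# Build the dense point cloud and a height-field mesh
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
chunk.buildModel(surface=PhotoScan.HeightField,
                 source=PhotoScan.DenseCloudData)

doc.save("/data/project/project.psz")
```

Once the job finishes, you can export the orthomosaic or DEM from the saved project and pull just those products back down, which keeps the outbound transfer (and storage fees) small.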
Tags: 3D modeling, Amazon, AWS, drones, EC2, photogrammetry
Saturday, February 27, 2016
Scraping crowd-sourced shake reports to produce a cumulative shake map for Oklahoma earthquakes
Last year, Oklahoma had more earthquakes than ever before. In 2015, the Oklahoma Geological Survey (OGS) counted 5,691 earthquakes[1] centered in the state. That's 270 more quakes than Oklahoma experienced in 2014.
Along with more reports of earthquakes came more reports of earthquake damage[2]. In one of the worst earthquake swarms of 2015, a chimney was torn from a house in Edmond[3], and an exterior wall of bricks came tumbling down from an apartment complex in northeast Oklahoma City[4]. Much has been learned since Oklahoma's earthquake surge began in 2009.
Scientists now link these earthquakes to the injection of waste water into deep disposal wells[5]. Water exists naturally in the earth along with oil and gas deposits, and when the oil and gas is drawn from the earth, the water comes with it. This water is separated from the oil and gas, and is disposed of in deep wells. Because these quakes are caused by human activities, they are known as “induced earthquakes.”[6]
Many questions remain, however. Namely, what are the long-term effects of having so many small earthquakes so frequently? And how can the impact of these quakes be compared across Oklahoma? The United States Geological Survey produces damage estimates automatically after significant earthquakes, but it does not produce them for swarms of smaller earthquakes, which may last for months or years.
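That's where the crowd-sourced "Did You Feel It?" (DYFI) reports behind those events come in. As an illustration (not the exact pipeline, which per the tags also involved CartoDB and R), the sketch below pulls 2015 Oklahoma events from the USGS FDSN event service and keeps each event's DYFI summary fields; the bounding box is a rough rectangle around the state.

```python
# Sketch: fetch 2015 Oklahoma earthquakes from the USGS FDSN event
# service and keep the crowd-sourced "Did You Feel It?" summaries.
# "cdi" is the community-determined intensity; "felt" is the number
# of DYFI responses. The bounding box roughly covers Oklahoma.
import requests

URL = "https://earthquake.usgs.gov/fdsnws/event/1/query"
params = {
    "format": "geojson",
    "starttime": "2015-01-01",
    "endtime": "2016-01-01",
    "minlatitude": 33.6,
    "maxlatitude": 37.0,
    "minlongitude": -103.0,
    "maxlongitude": -94.4,
    "minmagnitude": 2.5,  # keeps the response to a manageable size
}

features = requests.get(URL, params=params).json()["features"]

# Keep only events that have at least one felt report
felt = [(f["id"],
         f["properties"]["mag"],
         f["properties"]["cdi"],
         f["properties"]["felt"])
        for f in features
        if f["properties"].get("felt")]

print("{} of {} events have felt reports".format(len(felt), len(features)))
```

Summing or gridding the per-event intensities over the whole year is what turns these point reports into a cumulative shake map.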
Tags: big data, CartoDB, cost, cumulative shaking, damage, data, data science, datavis, earthquake, fracking, impact, induced earthquakes, intensity, magnitude, mapping, Oklahoma, Python, R, visualization
Thursday, November 5, 2015
Climbing a virtual mountain, Cesium, and other big life moments
Much has happened since my last update, but I'll keep it simple. Last August, my lovely wife finished her awesome dissertation on the roots and development of toxic discourse in science fiction. With her PhD and a job offer in hand, we left the University of Illinois for Oklahoma City (Urbana, we will always love you).
Initially, I kept working remotely for the National Science Foundation grant EnLiST, continuing the analysis of our teaching and learning network data and helping UIUC faculty plan for future grants. Eventually I hit the job market and became the instructional technologist for the Center for Learning and Teaching at Oklahoma City Community College.
What does that mean? Basically, it means making sure the college stays up-to-date with technological change. Some days this means helping faculty with changes in our learning management system (LMS). Other days, it means building 3D models, visualizations, and applications for learning (such as this digital mountain).
Tags: Cesium, CesiumJS, css, Elk Mountain, GIS, javascript, mapping, nature, Oklahoma