Proposed Projects


ID Name Period
GMU-17-01 Utilizing High Performance Computing to Detect the Relationship Between the Urban Heat Island and Land System Architecture at the Microscale

An urban heat island (UHI) is an urban area that is significantly warmer than its surrounding rural areas as a result of human activities. The UHI combines the results of all surface–atmosphere interactions and energy fluxes between the atmosphere and the ground, and is closely linked to water use, energy use, and health-related consequences, including decreased quality of living conditions and increased heat-related injuries and fatalities (Changnon et al., 1996; Patz et al., 2005). Prior studies have demonstrated a correlation between land system architecture and the urban heat island based on medium- or coarse-resolution data. However, these measurement scales may obscure stronger or different relations between land cover and land surface temperature (LST), because the mixture of land covers at coarse resolutions can hide relations at the finer resolutions where more urban land cover variability occurs (Zhou et al., 2011; Myint et al., 2013; Jenerette et al., 2016). Consequently, an evaluation of the urban heat island at micro scales is needed.
GMU-17-02 Deep Learning for Improving Severe Weather Detection and Anomaly Analysis

Severe weather, including dust storms, hurricanes, and thunderstorms, causes significant loss of life and property every year. Better detection and forecasting of severe weather events would have an immediate impact on society. Numerical simulations and Earth observations have improved greatly in spatiotemporal resolution and coverage, so that scientists and researchers are better able to understand and forecast severe weather phenomena. However, it remains challenging to obtain long-term climatologies for different severe weather events, and even the most advanced forecasting models struggle to predict events accurately due to forecast uncertainty. We propose a cloud-based deep learning system to mine and learn severe weather events (e.g., dust storms, hurricanes, and thunderstorms) and their patterns, as well as to detect anomalies in forecasting results. The system will be tested with three use cases (dust storms, hurricanes, and thunderstorms) and will help meteorologists better detect and understand the evolution patterns of severe weather events.
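As a minimal illustration of the anomaly-detection component, the sketch below flags values that deviate strongly from a series mean. This is a simple statistical baseline, not the proposed deep learning system, and the wind-speed numbers are synthetic:

```python
import numpy as np

def flag_anomalies(series, k=3.0):
    """Flag values more than k standard deviations from the series mean.

    Returns a boolean array marking candidate anomalies.
    """
    x = np.asarray(series, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return np.zeros(x.shape, dtype=bool)
    return np.abs(x - mu) > k * sigma

# A synthetic hourly wind-speed series with one injected spike at index 5.
wind = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 25.0, 5.2, 5.1])
print(flag_anomalies(wind, k=2.0))
```

A deployed system would replace this fixed threshold with a model learned from forecast/observation pairs, but the input/output contract (a time series in, anomaly flags out) stays the same.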
GMU-17-05 Micro-scale environmental data collection from moving sensors

This project aims to 1) develop methods for real-time, micro-scale data collection with moving sensors; 2) augment and update existing data, and generate new data and new geometries; 3) improve the accessibility of public space using data that is nearly universally needed but unavailable; and 4) spread the resulting methods, workflows, and knowledge to IAB members.
GMU-17-06 Real-time message georeferencing for geocrowdsourced data integration

This project aims to 1) explore, develop, and demonstrate the use of gazetteer-based geoparsing for generating footprints from text-based location descriptions; 2) develop a library of spatial footprints (simple and complex); 3) use spatial footprints for message mapping; and 4) use spatial footprints for quality assessment of crowdsourced geospatial data.
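The gazetteer-based geoparsing in step 1 can be sketched as a lookup of known place names against a message; the toy gazetteer and its bounding-box footprints below are purely illustrative, not project data:

```python
# A toy gazetteer mapping place names to bounding-box footprints
# (min_lon, min_lat, max_lon, max_lat). Names and boxes are illustrative.
GAZETTEER = {
    "fairfax": (-77.36, 38.79, -77.27, 38.87),
    "washington": (-77.12, 38.79, -76.91, 38.99),
}

def geoparse(message):
    """Return (name, footprint) pairs for gazetteer entries found in a message."""
    text = message.lower()
    return [(name, box) for name, box in GAZETTEER.items() if name in text]

hits = geoparse("Flooding reported near Fairfax after the storm")
print(hits)
```

A production geoparser would add tokenization, disambiguation, and the simple/complex footprint library of step 2, but the core text-to-footprint mapping is as above.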
UCSB-17-01 Siemens: Semantic Application Logic Design for Subject Matter Experts

This project aims to design semantic application logic for subject matter experts. Four milestones are listed below: 1) Conceptualize and implement a framework and interface supporting the import and inclusion of SPIN rules and domain graphs. 2) Add logic validation and execution capabilities to the workflow. 3) Develop export filters that will convert the logic to non-native (non-RDF) execution formats, such as RIF or JSON. 4) Integrate and test components.
UCSB-17-02 Forecasting Future Urban Expansion in an African Secondary City, Douala, Cameroon: Transfer of Expertise in GIS and Land Use Change Modeling to Douala University

The goals of this project are 1) to bring visiting scholars from Douala University in Cameroon to a training session on the use of GIS and remote sensing to map land use and its changes, and to map Douala’s built-up extent at multiple historical time periods; 2) to use the resulting data to create forecasts of long-term urban growth and land use change in the region; and 3) to promote informed and sustainable urban planning.

Project success will be measured by the number of people trained, the number of cities mapped and modeled, and the number of reports and papers created for use in planning and land management.

GMU-16-02 Cloud computing and big data management 2016-2017
GMU-16-03 Computing technology: SmartDrive 2016-2017
GMU-16-04 Health Mapping Incorporating Data Reliability 2016-2017
UCSB-16-01 Applications of High Accuracy and Precision Building Data for Urban Areas 2016-2017
UCSB-16-02 Urban Modeling in Uzbekistan 2016-2017
UCSB-16-03 An Open World Gazetteer 2016-2017
Harvard-16-01 HHyperMap 2016-2017
Harvard-16-02 Semantically enhanced workbench for responsive big data geoprocessing and visualization 2016-2017
Harvard-16-03 Exploring relationships between cancer vulnerability/resilience and emotional condition/environment from social media 2016-2017
GMU-15-02 Upgrade the Delivery of NASA Earth Observing System Data Products for Consumption by ArcGIS

The content and format of NASA EOS data products are defined by their respective Science Teams, stretching back over the past 25 years. Many of these data models are dated and difficult to consume with other geospatial tools. Specifically, these tools are in some cases unable to read the files and/or unable to properly interpret the data organization inside them, so the data cannot be visualized or analyzed. A solution that applies to all of these data products across NASA data centers would be valuable. We propose a plug-in framework, built on the GDAL open source library, to interpret the non-compliant data. The framework offers extensibility within EOSDIS, allowing the multiple NASA data centers to construct their own plug-ins to adapt their own data products.
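The plug-in dispatch at the heart of such a framework can be sketched in a few lines. This is a generic registry pattern, not GDAL's actual driver API; the product prefix and reader below are hypothetical:

```python
# Minimal plug-in registry: each data center registers a reader together
# with a predicate that recognizes its (possibly non-compliant) products.
_READERS = []

def register_reader(matches, reader):
    """matches(filename) -> bool decides whether reader can handle the file."""
    _READERS.append((matches, reader))

def open_product(filename):
    """Dispatch to the first registered plug-in that claims the file."""
    for matches, reader in _READERS:
        if matches(filename):
            return reader(filename)
    raise ValueError(f"no plug-in claims {filename!r}")

# Hypothetical plug-in for one product family, keyed on a filename prefix.
register_reader(lambda f: f.startswith("MOD04"),
                lambda f: {"product": "MOD04", "source": f})

print(open_product("MOD04_L2.A2015001.hdf"))
```

GDAL drivers work analogously: each driver inspects the file and either claims it or defers, which is what makes the framework extensible across data centers.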
GMU-15-03 Analyzing Spatiotemporal Dynamics Using Place-Based Georeferencing

The human world is a world of places, where verbal description and narrative use placenames to describe occurrences, locations, and events. The geospatial, computational, and analytical worlds rely instead on metric georeferencing to place these occurrences, locations, and events on a map. The gazetteer is the linkage between these two worlds, and the means for translating the human world into the computational world. With a new emphasis on social media and crowdsourcing in geospatial data production, gazetteers and the associated techniques of geoparsing and georeferencing are a critical element of an emerging geospatial toolkit. We use gazetteers to validate crowdsourced event data contributed by end users, treating placenaming as a validation tool within quality assessment for geocrowdsourced data. Strategies and best practices for generating and maintaining gazetteer databases for georeferencing crowdsourced data will be explored, determined, and presented.
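One way to use a gazetteer as a validation tool is to cross-check a report's GPS fix against the footprint of the place it names. The sketch below assumes rectangular footprints and made-up coordinates; it is an illustration of the idea, not the project's method:

```python
def inside(footprint, lon, lat):
    """True if (lon, lat) falls inside a (min_lon, min_lat, max_lon, max_lat) box."""
    min_lon, min_lat, max_lon, max_lat = footprint
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

def validate_report(report, gazetteer):
    """Cross-check a report's coordinates against the footprint of the named place."""
    footprint = gazetteer.get(report["place"].lower())
    if footprint is None:
        return "unknown place"
    if inside(footprint, report["lon"], report["lat"]):
        return "consistent"
    return "suspect"

gaz = {"fairfax": (-77.36, 38.79, -77.27, 38.87)}
print(validate_report({"place": "Fairfax", "lon": -77.30, "lat": 38.83}, gaz))  # consistent
print(validate_report({"place": "Fairfax", "lon": -100.0, "lat": 38.83}, gaz))  # suspect
```

Reports flagged "suspect" would be routed to further quality assessment rather than discarded outright.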
GMU-15-04 Using Sonification to Analyze Spatiotemporal Dynamics in High-Dimensional Data

The human senses are paramount in constructing knowledge about the everyday world around us. The human sensory system is also a key to geospatial knowledge discovery, where patterns, trends, and outliers can be detected visually, and explored in more detail. As the complexity and size of geospatial datasets increase, the tools for geographic knowledge discovery need to expand. This research looks at the use of sonification and auditory display systems to expand the visualization toolkit. First, we use sonification as a way of simplifying the exploration of large, multidimensional data, including space-time data, where certain dimensions of data can be removed from the visual domain and represented efficiently with sound, leading to more effective geographic knowledge discovery. Second, we use sonification as a means of redundant display to reinforce cartographic and geospatial aspects of spatial-temporal display in low-vision environments.
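A common starting point for moving a data dimension from the visual to the auditory channel is parameter-mapping sonification, where attribute values are mapped to pitch. The frequency range and the sample values below are illustrative assumptions:

```python
def sonify(values, f_low=220.0, f_high=880.0):
    """Linearly map data values to audio frequencies (parameter-mapping sonification).

    The lowest value maps to f_low (Hz), the highest to f_high.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    return [f_low + (v - lo) / span * (f_high - f_low) for v in values]

# e.g. a temperature attribute lifted out of the visual display:
freqs = sonify([10.0, 15.0, 20.0])
print(freqs)  # [220.0, 550.0, 880.0]
```

The returned frequencies would then drive a synthesizer; the two-octave range used here keeps the mapping within a comfortable listening band.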
GMU-15-05 A Cyberinfrastructure-based Disaster Management System using Social Media Data

During emergencies, it is critical to deliver accurate and useful information to the impacted communities, and to assess damage to property, people, and the environment, in order to coordinate response and recovery activities, including evacuations and relief operations. Novel information streams from social media are redefining situational awareness and can be used for damage assessment, humanitarian assistance, and disaster relief operations. These streams are diverse, complex, and overwhelming in volume, velocity, and the variety of viewpoints they offer. Negotiating these overwhelming streams is beyond the capacity of human analysts, so an effective framework is needed to mine and deliver disaster-relevant information in real time.
GMU-15-06 FloodNet: Demonstrating a Flood Monitoring Network Integrating Satellite, Sensor-Web and Social Media for the Protection of Life and Property

Flooding is the most costly natural disaster, striking with regularity and destroying property, agriculture, transportation, communication, and lives. Floods impact developing countries profoundly, but developed nations are hardly immune, with floods claiming thousands of lives every year. The threat is increasing as we build along riverbanks and flood plains, construct dykes and levees that channelize flow, and as climate change brings more extreme weather events, including floods. The first line of defense for the protection of life and property is flood monitoring. Knowledge of floods is truly power when issuing warnings, managing infrastructure, assessing damage, and planning for the future. Information about active floods can be gleaned from satellite sensors, ground stations, and sensor-webs, and harvested from social media and citizen scientists. This information is complemented by flood hazard and risk maps, and by weather and climate forecasts. These flood information elements exist separately, but would be much more effective at producing actionable flood knowledge if integrated into a seamless flood monitoring network. We therefore propose to demonstrate a flood monitoring network (FloodNet) that integrates flood information from satellites, sensor-webs, social media, risk maps, and weather/climate forecasts into a user-focused visualization interface (such as a GIS or Google Earth) that enables the production of actionable flood knowledge. We will largely focus on networking existing flood information elements available from government agencies, harvested from social media, and produced by satellite sensors. The demonstration will be performed in a historical context, focused on a few well-known recent flood events in the Mid-Atlantic region, with a vision for global real-time implementation.
We will take advantage of recent advances in cloud computing, visualization tools, and spatiotemporal knowledge toolboxes in implementing FloodNet. The resulting flood monitoring network will guide civil protection officials, insurers, and citizens as to current flood hazards and future flooding risks.
GMU-15-07 Benchmarking Timely Decision Support and Integrating Multi-Source Spatiotemporal Environmental Datasets

In the past decade, natural disasters have become more frequent. It is widely recognized that the increasing complexity of environmental problems at local, regional, and global scales must be addressed through integrated approaches. Explosive growth in spatiotemporal data and the emergence of social media make it possible, and also emphasize the need, to develop new and computationally efficient geospatial analytics tailored to big data. This project aims to provide decision support for protecting life and property with maximum accuracy and minimum human intervention by leveraging near-real-time integration of government satellite and model assets using HPC, virtual computing and storage environments, and OGC standard protocols. Additionally, we will benchmark the latency and scientific validity of end-to-end (E2E) solutions that use machine-to-machine (M2M) interfaces to exploit NOAA, USGS, and NASA environmental data from satellites, forecast models, and social media to generate more accurate and timely decision support information.
UCSB-15-02 Assessment and Applications of High Accuracy and Precision building data for urban areas

The company Solar Census has developed an unprecedented means by which high-resolution (10 cm) stereo overhead imagery is processed photogrammetrically to extraordinary levels of accuracy; models are then applied that orthorectify the imagery and extract building footprints and roofs with exceptional fidelity. Test acquisitions of new imagery have been supported by the Department of Energy for test areas in northern California, and new data are forthcoming for the entire state and for the State of New York. Solar Census has an application that solves the solar equation across building roofs to identify optimal locations for the placement of photovoltaic panels that generate distributed solar power. The purposes of the collaboration between Solar Census and the UCSB Geography site of the I/UCRC Center for Spatiotemporal Thinking, Computing and Applications are twofold: 1) complete an accuracy assessment to quantify the vertical and horizontal accuracy of the new data; and 2) explore innovative potential new applications of the data that could present new revenue streams and business opportunities, as the data could potentially be available nationwide.
Harvard-15-01 A Training-by-Crowdsourcing Approach for Place Name Extraction from Large Volumes of Scanned Maps

We propose to develop a training-by-crowdsourcing approach for automatic extraction of place names in large volumes of georeferenced scanned maps. Place names very often exist only in paper maps and have potential use both for adding semantic content and for providing search and indexing capabilities to the original scanned maps. Moreover, place names can be used to strengthen existing gazetteers (place name databases), which are the foundation for effective geotagging and georeferencing of many document and media types. The proposed solution will provide a map text extraction service and a web map client interface that accesses the service. The extraction service will consume raw map images from standard WMSs and output spatiotemporally labeled place names. The client will allow users to curate (i.e., update, delete, insert, and edit) extraction results and share the results with other users. The user curation process will be recorded and sent to the extraction service to train the underlying map processing algorithms for handling map areas where no user training has yet been done.
Harvard-15-02 Building an Open Source, Real-Time, Billion Object Spatio-Temporal Exploration Platform

There is currently no general-purpose platform to support interactive queries and geospatial visualizations against datasets containing even a few million features when queries return more than ten thousand records. To begin to address this fundamental lack of public infrastructure, we will design and build an open source platform to support search and visualization against a billion spatio-temporal features. The instance will be loaded with the latest billion geotweets (tweets that contain GPS coordinates from the originating device), which the CGA has been harvesting since 2012. The system will run on commodity hardware and well-known software. It will support queries by time, space, keyword, user name, and operating system. The platform will be capable of returning responses to complex queries in less than 2 seconds. Spatial heatmaps will be used to represent the distribution of results returned at any scale, for any number of features. Temporal histograms will be used to represent the distribution over time of results returned at any scale. The system will be capable of generating kernel density visualizations from massive collections of point measurements such as weather, pollution, or other sensor streams.
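The heatmap rendering described above reduces, at its core, to binning point features into a grid of counts. A minimal sketch of that binning step (with made-up points and bounds, and none of the platform's indexing or scaling machinery):

```python
import numpy as np

def heatmap(lons, lats, bounds, nx=4, ny=4):
    """Bin points into an ny-by-nx grid of counts over (min_lon, min_lat, max_lon, max_lat)."""
    min_lon, min_lat, max_lon, max_lat = bounds
    counts, _, _ = np.histogram2d(
        lats, lons,
        bins=[ny, nx],
        range=[[min_lat, max_lat], [min_lon, max_lon]],
    )
    return counts  # row 0 = southernmost latitude band

pts_lon = [-77.0, -77.0, -76.5, -70.0]
pts_lat = [38.9, 38.9, 38.5, 42.0]
grid = heatmap(pts_lon, pts_lat, (-80.0, 35.0, -70.0, 45.0))
print(int(grid.sum()))  # 4: all four points fall inside the bounds
```

Because the grid size is fixed regardless of how many points are binned, the same rendering cost applies whether a query returns ten features or ten million, which is what makes heatmaps suitable for the any-scale requirement.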
Harvard-15-03 Addressing the Search Problem for Geospatial Data

We are currently engaged in building a general-purpose, open source, global registry of map service layers on servers across the web. The registry will be made available for search via a public API for anyone to use to find and bind to map layers from within any application. We are developing a basic UI that will integrate with WorldMap (an open source, general-purpose map collaboration platform) and make registry content discoverable by time, space, and keyword, and savable and sharable online. The system will allow users to visualize the geographic distribution of search results, regardless of the number of layers returned, by rendering heatmaps of overlapping layer footprints. All assets in the system will be map layers that can be used immediately within WorldMap or within any other web or desktop mapping client. Uptime and usage statistics will be maintained for all resources and will be used to continually improve search. Core elements of this project are currently funded by a grant from the National Endowment for the Humanities, but there are important aspects which are not supported. For example, the grant focuses on OGC and Esri image services, though there exist many other spatial assets in need of organization, including feature services, processing services, shapefiles, KML/KMZ, and other raster and vector formats. There are also important types of metadata we are not handling. We have developed basic tools for crawling the web using Hadoop and a pipeline to harvest and load results to a fast registry, but there are many ways both crawl and harvest can be improved.
Harvard-15-04 HyperMap: An Ontology-driven Platform for Interactive Big Data Geo-analytics

Sensing technology and the digital traces of human activity are providing us with ever larger spatiotemporally referenced data streams. Computing and automated analysis advances are at the same time decreasing the effort of drawing knowledge from such large data volumes. Still, there is a gap between the ability to run large batch-type data processing tasks and the interactive engagement with analysis that characterizes most research. There appear to be three principal scales (in both volume and task time) of processing tasks: asynchronous summarization of billions of records by space-time and other relevant dimensions, synchronous analysis of the summary data using statistical / functional models, and interactive visual interpretation of model results. The forward workflow is becoming more and more common, but feedback from interpretation to refine the larger-scale process steps is still most often a logistical nightmare. We propose to develop a platform that flexibly links the three stages of geo-analysis using a provenance-orchestration ontology and OGC service interface standards such as the Web Processing Service (WPS). The purpose of the platform will be to provide domain experts the tools to explore - iteratively and interactively - extremely large datasets such as the CGA geo-tweet corpus without spending most of their time performing systems engineering. Researchers will be able to leverage a semantic description of an analysis workflow to drill back from interesting visual insights to the details of processing, and then trigger process refinements by updating the workflow description instead of having to rewrite processing codes and scripts. The HyperMap platform is envisioned to support several approaches to big data summarization. Initial design targets include factorization of unstructured data such as geo-tweets, classification of coverages, and recognition of imagery feature hierarchies.
Harvard-15-05 Terrain and Hydrology Processing of High Resolution Elevation Data Sets

Raster data sets representing elevation are being released at increasingly high resolutions. The National Elevation Dataset (NED) has gone from 30m to 10m and is now available in many states at 3m resolution. At the local and state level, LIDAR-based elevation data is available for many locations, particularly coastal areas and those subject to flooding. As horizontal resolution improves, vertical resolution and accuracy are also improving, but while higher resolution is improving the ability to leverage these data sets for modeling hydrological flow, visibility, slope, and other data processing operations, the exponentially larger size of the data sets is presenting significant data processing challenges, even with professional workstation GIS tools. Under this proposal, the project team will develop and implement new algorithms for performing parallel data processing on large raster data sets. The work will leverage the open source Apache Spark and GeoTrellis projects, both based on the Scala functional programming language. It will also take advantage of other open source efforts supporting data processing at scale, including the Hadoop Distributed File System (HDFS) and indexing tools such as Cassandra and Accumulo. The results of the work will be released under a business-friendly Apache2 license, and will be aimed at supporting execution of large elevation data processing operations on clusters of virtual machines. Specific processing operations may include: viewshed, flow accumulation, flow direction, watershed delineation, sink, slope, aspect, and profiling operations. The proposed work will be synergistic with other proposed research projects, including the HyperMap effort to classify terrain types and channel areas based on large, high-resolution elevation data sets.
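To make the listed operations concrete, the sketch below computes one of them (slope) serially on a tiny DEM with central differences. It is a single-machine illustration in Python, not the proposed GeoTrellis/Spark implementation, and the DEM values are synthetic:

```python
import numpy as np

def slope(dem, cellsize=1.0):
    """Slope magnitude (rise over run) from a DEM via central differences."""
    dzdy, dzdx = np.gradient(dem.astype(float), cellsize)  # per-axis gradients
    return np.hypot(dzdx, dzdy)

# A plane rising 1 unit per cell toward the east has slope 1 everywhere.
dem = np.array([[0, 1, 2],
                [0, 1, 2],
                [0, 1, 2]], dtype=float)
print(slope(dem))
```

Operations like this are embarrassingly parallel away from tile edges, which is why tiling the raster and distributing tiles (with small halo overlaps) across a Spark cluster is a natural scaling strategy.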
Harvard-15-06 Feature Classification Using Terrain and Imagery Data at Scale

Drones, micro-satellites, and other innovations promise to both lower the cost and rapidly increase the amount of available raster imagery data. Initial use of this imagery is currently focused on supporting visualization of geospatial data. However, there is substantial opportunity to provide the ability to extract features from the imagery using simple user interfaces. Feature classification from raster imagery is not a new capability, and it is supported by several commercial workstation products. In addition, contemporary techniques rely not only on the imagery itself, but also leverage elevation data to improve the accuracy of the feature classification. However, the ability to do so with large data sets through a simple browser-based user interface is a significant challenge. Under this proposal, the project team will develop and implement a prototype web-based software tool that will be able to use a combination of elevation and imagery data to enable users to extract vector polygon features with real-time processing speeds. The work will leverage the open source Apache Spark and GeoTrellis projects, both based on the Scala functional programming language. It will also take advantage of other open source efforts supporting data processing at scale, including the Hadoop Distributed File System (HDFS) and indexing tools such as Cassandra and Accumulo. The results of the work will be released under a business-friendly Apache2 license, and will be aimed at supporting execution of large data processing operations on clusters of virtual machines. The proposed work will be synergistic with other proposed research projects, including the HyperMap project to classify terrain types and channel areas based on large, high-resolution elevation data sets, and the place name extraction from historic maps project.
GMU-14-04 Developing a spatiotemporal cloud advisory system for better selecting cloud services

We propose a web-based cloud advisory system to (1) integrate heterogeneous cloud information from different providers, (2) automatically retrieve up-to-date cloud information, and (3) recommend and evaluate cloud solutions according to users' selection preferences.
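The recommendation step (3) can be sketched as a weighted scoring of offerings against user preferences. The providers, criteria, and numbers below are hypothetical placeholders, not benchmark data:

```python
def rank_offerings(offerings, weights):
    """Rank offerings by a weighted sum of normalized criteria; higher is better.

    `offerings` maps provider -> {criterion: value in [0, 1]}; cost-like
    criteria are assumed to be pre-inverted so that larger is always better.
    """
    def score(attrs):
        return sum(weights.get(k, 0.0) * v for k, v in attrs.items())
    return sorted(offerings, key=lambda name: score(offerings[name]), reverse=True)

# Hypothetical normalized scores for illustration only.
offerings = {
    "provider_a": {"cpu": 0.8, "io": 0.6, "value": 0.9},
    "provider_b": {"cpu": 0.9, "io": 0.9, "value": 0.4},
}
prefs = {"cpu": 0.2, "io": 0.3, "value": 0.5}  # a user's selection preferences
print(rank_offerings(offerings, prefs))  # ['provider_a', 'provider_b']
```

The advisory system would populate the offering scores automatically from the retrieved cloud information, while the weights come from each user's stated preferences.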
GMU-14-02 Developing an open spatiotemporal analytics platform for big event data

We propose to design a visual analytics platform to systematically perform inductive pattern analysis on real-time volunteered event data. The platform will be built on tools and methods that we developed in previous studies. The completed platform will not only enable spatiotemporal pattern exploration of big event data in the short term, but will also lay a concrete foundation for using volunteered data for tasks such as urban planning in the long term.
GMU-14-06 Incorporating quality information to support spatiotemporal data and service exploration 2014-2015
Harvard-14-01 Temporal gazetteer for geotemporal information retrieval

Place names are a key part of geographic understanding and carry a full sense of changing perspective over time, but existing gazetteers do not in general represent the temporal dimension. This project will develop, populate, and implement services for a place name model that incorporates realistic complexity in the temporal, spatial, and language elements that form a place name. Additional tools will be developed to conflate and reconcile place name evidence from authoritative, documentary, and social sources.
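A minimal version of the temporal dimension described above attaches a valid-time interval to each place name record and answers time-scoped queries. The record fields and attestation years below are illustrative assumptions, not the project's actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlaceName:
    name: str
    lon: float
    lat: float
    start: int            # first year the name is attested
    end: Optional[int]    # None means the name is still in use

def names_at(records, year):
    """Return names whose valid-time interval contains the given year."""
    return [r.name for r in records
            if r.start <= year and (r.end is None or year <= r.end)]

# Illustrative records for one location; years are approximate.
gaz = [
    PlaceName("Constantinople", 28.98, 41.01, 330, 1930),
    PlaceName("Istanbul", 28.98, 41.01, 1930, None),
]
print(names_at(gaz, 1500))  # ['Constantinople']
print(names_at(gaz, 2000))  # ['Istanbul']
```

A realistic model would also carry language variants, source attribution, and uncertain interval bounds, which is where the conflation and reconciliation tools come in.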
Harvard-14-04 Cartographic ontology for semantic map annotation

Map annotation produces highly relevant, high-value information whose utility, however, often critically depends on semantic interoperability. Achieving that requires an ontology-based, semantic web, linked open data approach. We will develop a key missing ingredient, the cartographic annotation ontology, to characterize the complex structures and rich visual, symbolic, and geospatial languages that maps use to represent geographic information.
Harvard-14-05 A Paleo-Event Ontology for Characterizing and Publishing Irish Historical and Fossil Climate Data

Integration of both Big and Little spatio-temporal data from different scientific domains is vital for validating climate models, as a single volcanic eruption, for example, can have a great effect. Yet observation of deep-time events, without deep-time observers, means we must discern paleo-events through observation of fossilized, event-proxy features. Using medieval monastic records, tree-ring data, ice core features, and volcanic eruption phenomena to inform our efforts, we will develop a deep-time climate event observation ontology to characterize the nature and relationships of the data.
Harvard-14-06 Emotional City – measuring, analyzing and visualizing citizens’ emotions for urban planning in smart cities

Emotional City provides a human-centered approach for extracting contextual emotional information from technical and human sensor data. The methodology used in this project consists of four steps: 1) detecting emotions using wristband sensors; 2) “ground-truthing” these measurements using a People as Sensors smartphone app; 3) extracting emotion information from crowdsourced data such as Twitter; and 4) correlating the measured and extracted emotions. Finally, the emotion information is mapped and fed back into urban management for decision support and for evaluating ongoing planning processes.
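Step 4, correlating the wristband measurements with the emotions extracted from crowdsourced text, reduces to computing a correlation coefficient over paired scores. The sketch below uses Pearson correlation on made-up arousal values:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two paired score series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical wristband vs. tweet-derived arousal for the same locations.
band = [0.2, 0.5, 0.9, 0.4]
tweet = [0.1, 0.6, 0.8, 0.5]
print(round(pearson(band, tweet), 3))
```

A strong positive coefficient would suggest the two sensing channels agree; in practice, spatial and temporal alignment of the two data streams is the harder part of the correlation step.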
UCSB-14-01 Dismounted Navigation 2014-2015
UCSB-14-02 Indoor Mapping Using Multi-Sensor Point Clouds

Develop and evaluate methods for creating 3D indoor maps using point clouds generated by multiple sensor platforms.
UCSB-14-03 Pattern driven Exploratory Interaction with Big Geo Data 2014-2015