Sunday, May 9, 2010

How can collective information contribute to a "bottom-up" approach in identifying a site/brief?

Preamble:

Formerly, the design process treated both the program and the site as predetermined criteria. We are given a site to work on and the function it is to serve, and if we are not given those, we first tell ourselves that we want to build an X on a place Y. We then examine everything that place Y has to offer a building X, or anything surprising that building X can offer Y. In either case, we have prejudged that building X is the best thing that could go on place Y, or vice versa. We practice as god-mode architects who declare, "I want to build this building that would fit onto this site; believe me, it would."

What if we leave either the site or the program indeterminate? Suppose we know what we want to build, but leave the choice of site out to the jury, especially for a large-scale building X that would have a significant impact on site Y. Rather than holding copious "community interaction sessions" at a few places the designers already have in mind, what if we tapped into the vast and varied quantities of related information on the internet and translated it into a brief, or used it to identify a site?

Introduction:

“Web environments can be pictured as data bases that can be provided as a central service or can be built from the bottom up in decentralised fashion. To an extent this reflects our division between designers and users with central systems having designers in distinctly different roles from users.”

“The extent to which users and/or designers can create derivative products from the data no matter how it is created is part of the functionality of the system. This can range from entirely preconceived ways of manipulating the data in the search for patterns or networks to loose sets of rules that users and designers can invoke in creating searches for new kinds of patterns that are not predetermined.”

Hudson-Smith et al. Mapping for the Masses: Accessing Web 2.0 through Crowdsourcing. UCL Working Paper Series 143 (8). London: CASA

This experiment follows on from the initial project proposal of a live/work/play office precinct at Civic Place, Parramatta. It tests the proposition that Parramatta is the best-suited place for commerce-oriented redevelopment by combining current market data related to such development. Three sets of information (employment market conditions, commercial real estate market conditions, and housing real estate conditions) were scrutinised in order to arrive at a place for commerce-oriented development at the LGA scale. Preferring real-time and flexible information, the experiment shied away from "official data" derived from sources such as the census or RPData, as they tended to be skewed, outdated, or generalised.

Assumptions/Challenges:

The data was expected to turn strongly in favor of the Parramatta LGA, since current metropolitan strategies back it heavily as a key performance area for commercial redevelopment. It was also expected that market data would be easily obtainable in legible forms; that is, the data would immediately present a title, a figure of density, and a time factor, and would be available in RSS format.
In the event, a significant portion of the data was skewed to highlight some areas: premium listings, where sellers paid extra to duplicate their advertisements, were evident in expensive areas such as Sydney and North Sydney. To maintain the information asymmetry between buyers and real estate agents, much of the data was not offered as RSS feeds, and the methodology was amended to strip out website data and reconvert it into usable RSS feeds.
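The stripping-and-reconversion step can be sketched in a few lines of Python. This is only an illustration of the idea, not the tooling actually used: the markup (a `div.listing` wrapper with a title link and a suburb span) is a hypothetical stand-in for whatever structure a given agency's website exposed.

```python
# Illustrative sketch: turn a scraped listings page into a minimal RSS feed.
# The HTML structure below is a hypothetical example, not any real agency's markup.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

SAMPLE_PAGE = """
<div class="listing"><a href="/ad/1">Office space, Parramatta CBD</a>
  <span class="suburb">Parramatta</span></div>
<div class="listing"><a href="/ad/2">Warehouse conversion</a>
  <span class="suburb">Blacktown</span></div>
"""

class ListingParser(HTMLParser):
    """Collects {title, suburb, link} dicts from the hypothetical markup."""
    def __init__(self):
        super().__init__()
        self.items, self._field, self._current = [], None, None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("class") == "listing":
            self._current = {"title": "", "suburb": "", "link": ""}
        elif self._current is not None and tag == "a":
            self._current["link"] = attrs.get("href", "")
            self._field = "title"
        elif self._current is not None and attrs.get("class") == "suburb":
            self._field = "suburb"

    def handle_data(self, data):
        if self._current is not None and self._field:
            self._current[self._field] += data.strip()

    def handle_endtag(self, tag):
        if tag in ("a", "span"):
            self._field = None
        elif tag == "div" and self._current is not None:
            self.items.append(self._current)
            self._current = None

def listings_to_rss(page):
    """Re-broadcast the scraped advertisements as a bare-bones RSS 2.0 feed."""
    parser = ListingParser()
    parser.feed(page)
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Scraped listings"
    for item in parser.items:
        node = ET.SubElement(channel, "item")
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "link").text = item["link"]
        ET.SubElement(node, "category").text = item["suburb"]  # location carried as category
    return ET.tostring(rss, encoding="unicode")
```

Once an agency's pages are re-expressed as RSS, they can be treated exactly like the feeds that other agencies already provide.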

Methodology:

Website information from various employment and real estate agencies was fed into Yahoo Pipes. A technique known as a "mashup" was used to unpack and recombine the data into the relevant fields: the location, the intensity, and a brief textual or pictorial description. These were then converted into RSS feeds, which could be cut up and organised into a spreadsheet. The spreadsheet was then imported into ArcGIS for analysis.
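The Pipes step amounts to merging several feeds and keeping only the three fields of interest. A minimal stand-in, assuming feeds in the simple RSS shape sketched earlier (location in `<category>`, description in `<title>`), with "intensity" taken as the listing count per location; the feed contents here are invented examples, not the project's real data:

```python
# Merge several RSS feeds into one spreadsheet-ready CSV of
# source / location / intensity / description (a stand-in for the Yahoo Pipes mashup).
import csv, io
import xml.etree.ElementTree as ET
from collections import Counter

JOBS_FEED = """<rss version="2.0"><channel>
  <item><category>Parramatta</category><title>Office admin role</title></item>
  <item><category>Sydney</category><title>Analyst position</title></item>
  <item><category>Sydney</category><title>Legal clerk</title></item>
</channel></rss>"""

OFFICES_FEED = """<rss version="2.0"><channel>
  <item><category>Blacktown</category><title>Suite for lease</title></item>
  <item><category>Parramatta</category><title>Whole floor, CBD</title></item>
</channel></rss>"""

def feed_items(feed_xml, source):
    """Yield one flat row per <item> in a feed."""
    for item in ET.fromstring(feed_xml).iter("item"):
        yield {"source": source,
               "location": item.findtext("category", ""),
               "description": item.findtext("title", "")}

def feeds_to_csv(feeds):
    """feeds: {source_name: rss_xml}. Returns CSV text ready for a spreadsheet."""
    rows = [row for source, xml in feeds.items() for row in feed_items(xml, source)]
    intensity = Counter(row["location"] for row in rows)  # listings per location
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["source", "location", "intensity", "description"])
    writer.writeheader()
    for row in rows:
        writer.writerow({**row, "intensity": intensity[row["location"]]})
    return buf.getvalue()
```

The resulting CSV carries the location and intensity columns that ArcGIS needs for the spatial analysis.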

The general descriptive content of each LGA's listings, once converted to RSS, was examined to determine the specific market climate of each area: for example, whether the area was compact and centralising, where the advertisements showed an abundance of new, compact properties, or dispersed and decentralised, where they showed an abundance of detached homes, old warehouses, and the like.
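One crude way to automate that reading is to count keyword hits in the advertisement text. The keyword lists below are illustrative guesses, not the vocabulary actually used in the experiment:

```python
# Classify an area's listings as compact/centralising or dispersed/decentralised
# by counting keyword hits. Both term lists are illustrative assumptions.
COMPACT_TERMS = {"new", "apartment", "suite", "cbd", "studio"}
DISPERSED_TERMS = {"detached", "house", "warehouse", "acreage", "yard"}

def classify_area(descriptions):
    """descriptions: list of advertisement strings for one LGA."""
    words = [w.strip(",.").lower() for w in " ".join(descriptions).split()]
    compact = sum(w in COMPACT_TERMS for w in words)
    dispersed = sum(w in DISPERSED_TERMS for w in words)
    if compact == dispersed:
        return "mixed"
    return "compact/centralising" if compact > dispersed else "dispersed/decentralised"
```

In practice the judgement was made by reading the feeds, but a scored classifier like this would let the same test run over every LGA at once.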

Results:

The data was recombined and evaluated. Parramatta LGA ranked only third as a place to build an office complex: after Sydney City, the experiment identified Blacktown as a more suitable candidate on economic criteria. The results were affected by the quality of the data, both in that the broadcasters tended to withhold information on pricing and size (for residential listings), and in that the data was deliberately skewed to make certain items more prominent than others.
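The final ranking step reduces to weighting the three indicator sets per LGA and sorting. A hypothetical re-creation, where the weights and figures are placeholders (the post reports only the resulting order, not the underlying scores):

```python
# Combine the three market indicators per LGA into one score and rank.
# Weights and sample values are placeholders, not the experiment's data.
def rank_lgas(indicators, weights=(1.0, 1.0, 1.0)):
    """indicators: {lga: (employment, commercial, housing)}, each normalised 0-1."""
    scores = {lga: sum(w * v for w, v in zip(weights, vals))
              for lga, vals in indicators.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Placeholder figures chosen to reproduce the reported order:
sample = {"Sydney City": (0.9, 0.8, 0.7),
          "Blacktown": (0.6, 0.7, 0.6),
          "Parramatta": (0.5, 0.6, 0.6)}
```

Adjusting the weights would let the same table answer different briefs, e.g. privileging employment density over real estate supply.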