Writing A Data Extraction To Web Page Program
-
In my area, there are a few different law enforcement agencies that post real-time data on car accidents. One is http://www.flhsmv.gov/fhp/traffic/crs_h501.htm. They post the accidents by county, and then in the location heading, they add the intersection and the city. For most of these counties and cities, our website, http://www.kempruge.com/personal-injury/auto-and-car-accidents/, has city- and county-specific pages. I need to figure out a way to pull the information from the FHP site and other real-time crash sites so that it will automatically post on our pages. For example, if there's an accident in Hillsborough County on I-275 in Tampa, I'd like to have that immediately post on our "Hillsborough county car accident attorney" page and our "Tampa car accident attorney" page.
I want our pages to have something comparable to a stock ticker widget, but for car accidents specific to each page's location, AND one that combines all the info from the various law enforcement agencies. Any thoughts on how to go about creating this?
As always, thank you all for taking time out of your work to assist me with whatever information or ideas you have. I really appreciate it.
-
-
Write a Perl program (or a script in another language) that will: a) read the target webpage, b) extract the data relevant to your geographic locations, c) write a small HTML file to your server that formats the data into a table that will fit on the webpage where you want it published.
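To make that concrete, here is a rough sketch of such a script in Perl using LWP::Simple. The URL is the FHP page mentioned above, but the county name, output path, script name, and the regex that picks out table rows are all placeholder assumptions; you would need to inspect the page's actual markup and adjust them.

```perl
#!/usr/bin/perl
# fhp_scrape.pl - rough sketch only. County, output path, and the
# row-matching regex are placeholders; adjust to the page's real HTML.
use strict;
use warnings;
use LWP::Simple qw(get);

my $url     = 'http://www.flhsmv.gov/fhp/traffic/crs_h501.htm';
my $county  = 'Hillsborough';                              # placeholder
my $outfile = '/home/youruser/public_html/includes/accidents-hillsborough.html';

my $page = get($url) or die "Could not fetch $url\n";

# a) page is fetched above; b) keep only table rows mentioning the county
my @rows = grep { /\Q$county\E/i } ( $page =~ m{<tr[^>]*>.*?</tr>}gis );

# c) write a small HTML fragment that the target page can include
open my $fh, '>', $outfile or die "Cannot write $outfile: $!\n";
print {$fh} qq{<table class="accident-ticker">\n};
print {$fh} "$_\n" for @rows;
print {$fh} "</table>\n";
close $fh;
```

You would write one output file per county/city page (or loop over a list of locations) and repeat the same pattern for each agency site you want to combine.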
-
Save that Perl program in your /cgi-bin/ folder. (You will need to change file permissions to allow the Perl program to execute and the small HTML file to be overwritten.)
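On a typical shared host that might look like the commands below; the paths are just the placeholders from the sketch above.

```
# Placeholder paths; adjust to wherever you saved the script and include file.
chmod 755 /home/youruser/public_html/cgi-bin/fhp_scrape.pl
chmod 644 /home/youruser/public_html/includes/accidents-hillsborough.html
```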
-
Most servers allow you to execute files from your /cgi-bin/ on a schedule such as hourly or daily. These are usually called "cron jobs". Find this in your server's control panel. Set up a cron job that will execute your Perl program automatically.
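For example, a crontab entry like this hypothetical one would run the script at the top of every hour (the path is an assumption; use wherever you actually saved the script):

```
# min  hour  day  month  weekday  command
0      *     *    *      *        /usr/bin/perl /home/youruser/public_html/cgi-bin/fhp_scrape.pl
```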
-
Place a server-side include, sized and shaped to match your data table, on the webpage where you want the information to appear.
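With Apache-style server-side includes enabled (this usually means the page needs a .shtml extension or an Includes option turned on), the include directive might look like this, pointing at the placeholder file from the sketch above:

```
<!--#include virtual="/includes/accidents-hillsborough.html" -->
```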
This set-up will work until the URL or format of the target webpage changes; then your script will produce errors or write garbage. When that happens you will need to change the URL in the script and/or the way it parses the page's format.
-
-
You need to get a developer who understands a lot about HTTP requests. You will need one who knows how to run a spidering program that pings the website, looks for changes, and scrapes data off of those sites. You will also need the program to check whether the coding on the page changes, because if it does, the scraping program will need to be re-written to account for it.
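One simple, hypothetical way to do that check in the same Perl setup is to hash the page's markup with the text content stripped out and compare it to the hash from the previous run; the file path and the stripping heuristic here are assumptions, not a finished solution.

```perl
#!/usr/bin/perl
# Rough sketch: warn when the source page's markup changes so you know the
# scraper probably needs to be re-written. Paths and heuristic are placeholders.
use strict;
use warnings;
use LWP::Simple qw(get);
use Digest::MD5 qw(md5_hex);
use Encode qw(encode_utf8);

my $url       = 'http://www.flhsmv.gov/fhp/traffic/crs_h501.htm';
my $hash_file = '/home/youruser/fhp_structure.md5';

my $page = get($url) or die "Could not fetch $url\n";

# Strip the text between tags and hash only the markup skeleton, so routine
# content updates (new accidents) do not trigger a false alarm.
(my $skeleton = $page) =~ s/>[^<]*</></g;
my $new_hash = md5_hex(encode_utf8($skeleton));

my $old_hash = '';
if (open my $in, '<', $hash_file) {
    $old_hash = <$in> // '';
    chomp $old_hash;
    close $in;
}

warn "Page structure appears to have changed; review the scraper.\n"
    if $old_hash && $old_hash ne $new_hash;

open my $out, '>', $hash_file or die "Cannot write $hash_file: $!\n";
print {$out} "$new_hash\n";
close $out;
```

Cron can run this alongside the scraper, and on many hosts cron will email the script's output to you, so the warning reaches your inbox.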
Ideally, those sites would have some sort of data API or XML feed to pull from, but odds are they do not. It would be worth asking, as the programming/programmer would then have a much easier time. It looks like the site is using CMS software from http://www.cts-america.com/ - they may be the better group to talk to about this, as you would potentially be interfacing with the software they develop rather than with some minion at the help desk for the dept of motor vehicles.
Good luck and please do produce a post here or a YouMoz post to show the finished product - it should be pretty cool!
Related Questions
-
Are slideshows etc. the new Splash Pages?
[How did Moz know that my question was about this?!] I've just completed an audit of nearly 50 websites in the tourism industry and 90% had a slideshow, large image or video taking up more than the initial screen on the fairly large-screened Chromebook that I'm using. I'm advising them all to ditch this and am often getting resistance from the site owners and their web developers. I know that these can be better optimized for page load speed, which is poor for most of these sites, especially on mobile devices; but from a usability standpoint, are these effective at drawing in users? Do users take the time to view these? Are they annoyed at always having to scroll down to see if there is anything else useful on the homepage? I think they are like the splash pages of the past: poor for usability and SEO. I've advised to at least make sure that the images are sized so the top of the page fits any screen (some of them do resize well for mobile devices, but maybe not laptops/desktops), include text with calls to action and click through to relevant content. I've been noting that they aren't media businesses selling images or videos, so they need to get their offerings to the top of the page so that users can see and engage more quickly. Anyone have any stats or experience on this? Thanks, Ann
Web Design | anndonnelly | 0
How to fix non-crawlable pages affected by CSS modals?
I stumbled across something new when doing a site audit in SEMRUSH today ---> Modals. The case: Several pages could not be crawled because of (modal:) in the URL. What I know: "A modal is a dialog box/popup window that is displayed on top of the current page" based on CSS and JS. What I don't know: How to prevent crawlers from finding them.
Web Design | Dan-Louis | 0
Do we still have this Page Rank / link juice / link equity dilution concept?
Hi all, As per the traditional or standard SEO rules, we have this link juice and dilution concept. Many websites have changed their linking structure with the belief that "the more pages, the more the PR will get diluted". Many websites then avoided linking to more pages from the homepage to avoid link juice dilution. We even followed the same. But I just wonder whether Google still handles websites and rankings the same way with respect to links. Many websites even avoid more 2nd-tier/hierarchy pages to avoid link dilution. I have gone through our competitors, who have been employing a lot of top-level pages like 2nd-tier/hierarchy pages but are still doing well in rankings. Please share your views and suggestions on this. Thanks
Web Design | vtmoz | 0
No-index part of page
Hi All, I want to copy articles from CNN/Bloomberg/etc. and show the content to my users in a Lightbox (CSS), but the problem is duplicate content. Do you have any idea how I can no-index part of a page/content?
Web Design | JohnPalmer | 0
Lots of Listing Pages with Thin Content on Real Estate Web Site - Best to Set them to No-Index?
Greetings Moz Community: As a commercial real estate broker in Manhattan I run a web site with over 600 pages. Basically the pages are organized in the following categories:
1. Neighborhoods (Example: http://www.nyc-officespace-leader.com/neighborhoods/midtown-manhattan): 25 pages, low bounce rate
2. Types of Space (Example: http://www.nyc-officespace-leader.com/commercial-space/loft-space): 15 pages, low bounce rate
3. Blog (Example: http://www.nyc-officespace-leader.com/blog/how-long-does-leasing-process-take): 30 pages, medium/high bounce rate
4. Services (Example: http://www.nyc-officespace-leader.com/brokerage-services/relocate-to-new-office-space): 3 pages, high bounce rate
5. About Us (Example: http://www.nyc-officespace-leader.com/about-us/what-we-do): 4 pages, high bounce rate
6. Listings (Example: http://www.nyc-officespace-leader.com/listings/305-fifth-avenue-office-suite-1340sf): 300 pages, high bounce rate (65%), thin content
7. Buildings (Example: http://www.nyc-officespace-leader.com/928-broadway): 300 pages, very high bounce rate (exceeding 75%)
Most of the listing pages do not have more than 100 words. My SEO firm is advising me to set them "No-Index, Follow". They believe the thin content could be hurting me. Is this an acceptable strategy? I am concerned that when Google detects 300 pages set to "No-Follow" they could interpret this as the site seeking to hide something and penalize us. Also, the building pages have a low click-thru rate. Would it make sense to set them to "No-Follow" as well? Basically, would it increase authority in Google's eyes if we set pages that have thin content and/or low click-thru rates to "No-Follow"? Any harm in doing this for about half the pages on the site? I might add that while I don't suffer from any manual penalty, volume has gone down substantially in the last month. We upgraded the site in early June and somehow 175 pages were submitted to Google that should not have been indexed. A removal request has been made for those pages. Prior to that we were hit by Panda in April 2012, with search volume dropping from about 7,000 per month to 3,000 per month. Volume had increased back to 4,500 by April this year only to start tanking again. It was down to 3,600 in June. About 30 toxic links were removed in late April and a disavow file was submitted with Google in late April for removal of links from 80 toxic domains. Thanks in advance for your responses!! Alan
Web Design | Kingalan1 | 0
Has anyone added Structured Data Markup Server Side?
I want to add some structured data to our company's website via microdata through schema.org. I have been asked to gather all of the requirements so that it can be done server side and automated when things change. I honestly don't know where to begin as there are many areas where it can be added. Has anyone done this server side before?
Web Design | Sika22 | 0
What is the difference between HTML5 and Web 2.0? What is Web 2.0, and is it better for SEO?
I'm a little bit confused with the new stuff. Are Web 2.0 webpages so much better? What changes?
Web Design | Naghirniac | 0
Two home pages?
One of my campaigns shows duplicate page content for domain xxx and xxx/index. There is only one index (home) page, so why does it report on two?
Web Design | Beemer | 0