Friday 30 December 2016

Data Mining - Retrieving Information From Data

Data mining is the process of retrieving information from data. It has become very important nowadays because processed data is usually kept for future reference, and mainly for security purposes, in a company. Data is transformed and processed into information, and that information is used in different ways depending on what is being extracted and from where.

It is commonly used in marketing, scientific research, fraud detection, surveillance and many other areas, and most of this work is done using a computer. The concept also goes by different names, such as data snooping, data fishing and data dredging; all of these refer to data mining, and the term used depends on the field one works in. One must understand the definition of data mining to be in a position to make proper use of data.

Methods of data mining have existed for centuries and are still in use today. There were two main early methods: regression analysis and Bayes' theorem. These methods are rarely used on their own nowadays, because the field has advanced and technology has really changed the entire system.

With the introduction of computers and modern technology, it has become very fast and easy to store information. Computers have made work easier, and one can expand one's knowledge of data crawling and learn how data is stored and processed through computer science.

Computer science is a discipline that sharpens these skills and deepens one's understanding of data crawling and of what data mining means. By studying computer science one can learn about clustering, support vector machines and decision trees, which are some of the topics covered in the field.

This knowledge must then be applied in practice: government institutions, small-scale businesses and supermarkets all use data.

The main reason most companies use data mining is that it assists in collecting the information and observations that a company generates in its daily activity. Such information is vital to any company's profile and needs to be checked and updated for future reference, just in case something happens.

Businesses that use data crawling focus mainly on return on investment, and they are able to know whether they are making a profit or a loss within a very short period. If the business is making a profit, it can offer customers a discount on the products it sells so that it can make even more profit. This is also very useful in human resource departments, where it helps in identifying the character traits of a person in terms of job performance.

Most people who use this method believe it is ethically neutral, but the way it is used nowadays raises a lot of questions about security and privacy. Data mining requires good data preparation, which can uncover different types of information, including information that should remain private.

A very common way in which this occurs is through data aggregation.

Data aggregation is when information is retrieved from different sources and put together so that it can be analyzed as a whole, and this must be done in a way that keeps the information secure. So if one is collecting data it is vital to know the following:

    How will one use the data being collected?
    Who will mine the data and use it?
    Is the data secure when one is away, or can someone else access it?
    How can one update the data when new information is needed?
    If the computer crashes, is there a backup somewhere?

It is important to be very careful with documents that contain a company's personal information so that the information cannot easily be manipulated.

source : http://ezinearticles.com/?Data-Mining---Retrieving-Information-From-Data&id=5054887

Monday 26 December 2016

One of the Main Differences Between Statistical Analysis and Data Mining

Two methods of analyzing data that are common in both academic and commercial fields are statistical analysis and data mining. While statistical analysis has a long scientific history, data mining is a more recent method of data analysis that has arisen from Computer Science. In this article I want to give an introduction to these methods and outline what I believe is one of the main differences between the two fields of analysis.

Statistical analysis commonly involves an analyst formulating a hypothesis and then testing the validity of this hypothesis by running statistical tests on data that may have been collected for the purpose. For example, if an analyst was studying the relationship between income level and the ability to get a loan, the analyst may hypothesize that there will be a correlation between income level and the amount of credit someone may qualify for.

The analyst could then test this hypothesis with the use of a data set that contains a number of people along with their income levels and the credit available to them. A test could be run that indicates for example that there may be a high degree of confidence that there is indeed a correlation between income and available credit. The main point here is that the analyst has formulated a hypothesis and then used a statistical test along with a data set to provide evidence in support or against that hypothesis.
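To make this concrete, here is a minimal sketch of such a test in Python using SciPy; the income and credit figures, and the common 0.05 significance threshold, are invented for illustration rather than taken from any real data set.

# A minimal sketch of the hypothesis test described above, using made-up
# income/credit figures purely for illustration.
from scipy import stats

incomes = [28000, 35000, 42000, 51000, 63000, 72000, 85000, 96000]
credit_limits = [2000, 2500, 4000, 5200, 7000, 8100, 10500, 12000]

# Pearson correlation: the null hypothesis is that there is no linear
# relationship between income and available credit.
r, p_value = stats.pearsonr(incomes, credit_limits)

print("correlation coefficient:", round(r, 3))
print("p-value:", round(p_value, 5))

# A small p-value (e.g. below 0.05) would give the analyst confidence
# that the observed correlation is unlikely to be due to chance alone.

The point is not the particular test; it is that the hypothesis comes first and the data is then used as evidence for or against it.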

Data mining is another area of data analysis that has arisen more recently from computer science and that has a number of differences from traditional statistical analysis. Firstly, many data mining techniques are designed to be applied to very large data sets, while statistical analysis techniques are often designed to form evidence in support of or against a hypothesis from a more limited set of data.

Probably the most significant difference here, however, is that data mining techniques are not used so much to form confidence in a hypothesis, but rather to extract unknown relationships that may be present in the data set. This is probably best illustrated with an example. Rather than the above case, where a statistician may form a hypothesis about income levels and an applicant's ability to get a loan, in data mining there is typically no initial hypothesis. A data mining analyst may have a large data set on loans that have been given to people, along with demographic information about these people such as their income level, their age, any existing debts they have and whether they have ever defaulted on a loan before.

A data mining technique may then search through this large data set and extract a previously unknown relationship between income levels, people's existing debt and their ability to get a loan.
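As a hedged sketch of that idea, the example below fits a small decision tree to an invented loan data set and prints the rules it discovers; the figures, feature names and the use of scikit-learn are illustrative assumptions, not a method described in the article.

# A minimal sketch of mining a loan data set for unknown relationships,
# using a decision tree on synthetic data (all figures are invented).
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: income, age, existing_debt; target: 1 = loan approved, 0 = declined
X = [
    [30000, 25, 12000],
    [45000, 32,  4000],
    [52000, 41, 20000],
    [75000, 38,  1000],
    [28000, 23, 15000],
    [90000, 50,  5000],
]
y = [0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# export_text prints the learned rules, i.e. the relationships the
# algorithm has extracted rather than ones we hypothesized in advance.
print(export_text(tree, feature_names=["income", "age", "existing_debt"]))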

While there are quite a few differences between statistical analysis and data mining, I believe this difference is at the heart of the issue. A lot of statistical analysis is about analyzing data to either form confidence for or against a stated hypothesis while data mining is often more about applying an algorithm to a data set to extract previously unforeseen relationships.

Source:http://ezinearticles.com/?One-of-the-Main-Differences-Between-Statistical-Analysis-and-Data-Mining&id=4578250

Thursday 15 December 2016

Web Data Extraction Services

Web data extraction from dynamic pages is among the services that may be acquired through outsourcing. It is possible to siphon information from proven websites through the use of data scraping software. The information is applicable in many areas of business. It is possible to get solutions such as data collection, screen scraping, email extraction and web data mining services, among others, from companies providing websites such as Scrappingexpert.com.

Data mining is common as far as the outsourcing business is concerned. Many companies outsource data mining services, and companies offering these services can earn a lot of money, especially in the growing outsourcing and general internet business. With web data extraction, you will pull data in a structured, organized format, even when the source of the information is unstructured or semi-structured.

In addition, it is possible to pull data that was originally presented in a variety of formats, including PDF, HTML and text, among others. The web data extraction service therefore provides diversity regarding the source of information. Large-scale organizations that receive large amounts of data on a daily basis have used data extraction services. It is possible for you to get highly accurate information in an efficient manner, and it is also affordable.

Web data extraction services are important when it comes to collection of data and web-based information on the internet. Data collection services are very important as far as consumer research is concerned. Research is turning out to be a very vital thing among companies today. There is need for companies to adopt various strategies that will lead to fast means of data extraction, efficient extraction of data, as well as use of organized formats and flexibility.

In addition, people will prefer software that provides flexibility as far as application is concerned. In addition, there is software that can be customized according to the needs of customers, and these will play an important role in fulfilling diverse customer needs. Companies selling the particular software therefore, need to provide such features that provide excellent customer experience.

It is possible for companies to extract emails and other communications from certain sources, as long as they are valid email messages, and this can be done without incurring any duplicates. You can extract emails and messages from a variety of web page formats, including HTML files, text files and others. It is possible to carry out these services quickly, reliably and with optimal output; hence, software providing such capability is in high demand. It can help businesses and companies quickly find contacts for the people to whom email messages should be sent.

It is also possible to use software to sort large amounts of data and extract information, an activity termed data mining. This way, the company will realize reduced costs, savings in time and an increased return on investment. In this practice, the company will carry out metadata extraction, data scanning and other tasks as well.

Source: http://ezinearticles.com/?Web-Data-Extraction-Services&id=4733722

Friday 9 December 2016

Data Mining vs Screen-Scraping

Data mining isn't screen-scraping. I know that some people in the room may disagree with that statement, but they're actually two almost completely different concepts.

In a nutshell, you might state it this way: screen-scraping allows you to get information, where data mining allows you to analyze information. That's a pretty big simplification, so I'll elaborate a bit.

The term "screen-scraping" comes from the old mainframe terminal days where people worked on computers with green and black screens containing only text. Screen-scraping was used to extract characters from the screens so that they could be analyzed. Fast-forwarding to the web world of today, screen-scraping now most commonly refers to extracting information from web sites. That is, computer programs can "crawl" or "spider" through web sites, pulling out data. People often do this to build things like comparison shopping engines, archive web pages, or simply download text to a spreadsheet so that it can be filtered and analyzed.

Data mining, on the other hand, is defined by Wikipedia as the "practice of automatically searching large stores of data for patterns." In other words, you already have the data, and you're now analyzing it to learn useful things about it. Data mining often involves lots of complex algorithms based on statistical methods. It has nothing to do with how you got the data in the first place. In data mining you only care about analyzing what's already there.

The difficulty is that people who don't know the term "screen-scraping" will try Googling for anything that resembles it. We include a number of these terms on our web site to help such folks; for example, we created pages entitled Text Data Mining, Automated Data Collection, Web Site Data Extraction, and even Web Site Ripper (I suppose "scraping" is sort of like "ripping"). So it presents a bit of a problem: we don't necessarily want to perpetuate a misconception (i.e., screen-scraping = data mining), but we also have to use terminology that people will actually use.

Source: http://ezinearticles.com/?Data-Mining-vs-Screen-Scraping&id=146813

Monday 5 December 2016

Three Common Methods For Web Data Extraction

Probably the most common technique traditionally used to extract data from web pages is to cook up some regular expressions that match the pieces you want (e.g., URLs and link titles). Our screen-scraper software actually started out as an application written in Perl for this very reason. In addition to regular expressions, you might also use some code written in something like Java or Active Server Pages to parse out larger chunks of text. Using raw regular expressions to pull out the data can be a little intimidating to the uninitiated, and can get a bit messy when a script contains a lot of them. At the same time, if you're already familiar with regular expressions, and your scraping project is relatively small, they can be a great solution.
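For instance, a hedged sketch of that regex approach in Python might look like the following; the sample markup and the pattern are invented for illustration, and a real page would need its own expression.

# A rough sketch of the regular-expression approach: pull URLs and link
# titles out of raw HTML. The sample markup is invented for illustration.
import re

page = """
<a href="http://example.com/news/1">First headline</a>
<a href="http://example.com/news/2">Second headline</a>
"""

# Capture the href value and the link text; tolerant of extra attributes.
link_pattern = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                          re.IGNORECASE | re.DOTALL)

for url, title in link_pattern.findall(page):
    print(url, "-", title.strip())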

Other techniques for getting the data out can get very sophisticated as algorithms that make use of artificial intelligence and such are applied to the page. Some programs will actually analyze the semantic content of an HTML page, then intelligently pull out the pieces that are of interest. Still other approaches deal with developing "ontologies", or hierarchical vocabularies intended to represent the content domain.

There are a number of companies (including our own) that offer commercial applications specifically intended to do screen-scraping. The applications vary quite a bit, but for medium to large-sized projects they're often a good solution. Each one will have its own learning curve, so you should plan on taking time to learn the ins and outs of a new application. Especially if you plan on doing a fair amount of screen-scraping it's probably a good idea to at least shop around for a screen-scraping application, as it will likely save you time and money in the long run.

So what's the best approach to data extraction? It really depends on what your needs are, and what resources you have at your disposal. Here are some of the pros and cons of the various approaches, as well as suggestions on when you might use each one:

Raw regular expressions and code

Advantages:

- If you're already familiar with regular expressions and at least one programming language, this can be a quick solution.
- Regular expressions allow for a fair amount of "fuzziness" in the matching such that minor changes to the content won't break them.
- You likely don't need to learn any new languages or tools (again, assuming you're already familiar with regular expressions and a programming language).
- Regular expressions are supported in almost all modern programming languages. Heck, even VBScript has a regular expression engine. It's also nice because the various regular expression implementations don't vary too significantly in their syntax.

Disadvantages:

- They can be complex for those that don't have a lot of experience with them. Learning regular expressions isn't like going from Perl to Java. It's more like going from Perl to XSLT, where you have to wrap your mind around a completely different way of viewing the problem.
- They're often confusing to analyze. Take a look through some of the regular expressions people have created to match something as simple as an email address and you'll see what I mean.
- If the content you're trying to match changes (e.g., they change the web page by adding a new "font" tag) you'll likely need to update your regular expressions to account for the change.
- The data discovery portion of the process (traversing various web pages to get to the page containing the data you want) will still need to be handled, and can get fairly complex if you need to deal with cookies and such.

When to use this approach: You'll most likely use straight regular expressions in screen-scraping when you have a small job you want to get done quickly. Especially if you already know regular expressions, there's no sense in getting into other tools if all you need to do is pull some news headlines off of a site.

Ontologies and artificial intelligence

Advantages:

- You create it once and it can more or less extract the data from any page within the content domain you're targeting.
- The data model is generally built in. For example, if you're extracting data about cars from web sites the extraction engine already knows what the make, model, and price are, so it can easily map them to existing data structures (e.g., insert the data into the correct locations in your database).
- There is relatively little long-term maintenance required. As web sites change you likely will need to do very little to your extraction engine in order to account for the changes.

Disadvantages:

- It's relatively complex to create and work with such an engine. The level of expertise required to even understand an extraction engine that uses artificial intelligence and ontologies is much higher than what is required to deal with regular expressions.
- These types of engines are expensive to build. There are commercial offerings that will give you the basis for doing this type of data extraction, but you still need to configure them to work with the specific content domain you're targeting.
- You still have to deal with the data discovery portion of the process, which may not fit as well with this approach (meaning you may have to create an entirely separate engine to handle data discovery). Data discovery is the process of crawling web sites such that you arrive at the pages where you want to extract data.

When to use this approach: Typically you'll only get into ontologies and artificial intelligence when you're planning on extracting information from a very large number of sources. It also makes sense to do this when the data you're trying to extract is in a very unstructured format (e.g., newspaper classified ads). In cases where the data is very structured (meaning there are clear labels identifying the various data fields), it may make more sense to go with regular expressions or a screen-scraping application.

Screen-scraping software

Advantages:

- Abstracts most of the complicated stuff away. You can do some pretty sophisticated things in most screen-scraping applications without knowing anything about regular expressions, HTTP, or cookies.
- Dramatically reduces the amount of time required to set up a site to be scraped. Once you learn a particular screen-scraping application the amount of time it requires to scrape sites vs. other methods is significantly lowered.
- Support from a commercial company. If you run into trouble while using a commercial screen-scraping application, chances are there are support forums and help lines where you can get assistance.

Disadvantages:

- The learning curve. Each screen-scraping application has its own way of going about things. This may imply learning a new scripting language in addition to familiarizing yourself with how the core application works.
- A potential cost. Most ready-to-go screen-scraping applications are commercial, so you'll likely be paying in dollars as well as time for this solution.
- A proprietary approach. Any time you use a proprietary application to solve a computing problem (and proprietary is obviously a matter of degree) you're locking yourself into using that approach. This may or may not be a big deal, but you should at least consider how well the application you're using will integrate with other software applications you currently have. For example, once the screen-scraping application has extracted the data how easy is it for you to get to that data from your own code?

When to use this approach: Screen-scraping applications vary widely in their ease-of-use, price, and suitability to tackle a broad range of scenarios. Chances are, though, that if you don't mind paying a bit, you can save yourself a significant amount of time by using one. If you're doing a quick scrape of a single page you can use just about any language with regular expressions. If you want to extract data from hundreds of web sites that are all formatted differently you're probably better off investing in a complex system that uses ontologies and/or artificial intelligence. For just about everything else, though, you may want to consider investing in an application specifically designed for screen-scraping.

As an aside, I thought I should also mention a recent project we've been involved with that has actually required a hybrid approach of two of the aforementioned methods. We're currently working on a project that deals with extracting newspaper classified ads. The data in classifieds is about as unstructured as you can get. For example, in a real estate ad the term "number of bedrooms" can be written about 25 different ways. The data extraction portion of the process is one that lends itself well to an ontologies-based approach, which is what we've done. However, we still had to handle the data discovery portion. We decided to use screen-scraper for that, and it's handling it just great. The basic process is that screen-scraper traverses the various pages of the site, pulling out raw chunks of data that constitute the classified ads. These ads then get passed to code we've written that uses ontologies in order to extract out the individual pieces we're after. Once the data has been extracted we then insert it into a database.

source: http://ezinearticles.com/?Three-Common-Methods-For-Web-Data-Extraction&id=165416

Wednesday 30 November 2016

Assuring Scraping Success with Proxy Data Scraping

Have you ever heard of "Data Scraping?" Data scraping is the process of collecting useful data that has been placed in the public domain of the internet (private areas too if conditions are met) and storing it in databases or spreadsheets for later use in various applications. Data scraping technology is not new, and many a successful businessman has made his fortune by taking advantage of it.

Sometimes website owners may not derive much pleasure from automated harvesting of their data. Webmasters have learned to disallow web scrapers access to their websites by using tools or methods that block certain IP addresses from retrieving website content. Data scrapers are left with the choice to either target a different website, or to move the harvesting script from computer to computer, using a different IP address each time, and extract as much data as possible until all of the scraper's computers are eventually blocked.

Thankfully there is a modern solution to this problem. Proxy data scraping technology solves the problem by using proxy IP addresses. Every time your data scraping program executes an extraction from a website, the website thinks it is coming from a different IP address. To the website owner, proxy data scraping simply looks like a short period of increased traffic from all around the world. They have very limited and tedious ways of blocking such a script, but more importantly -- most of the time, they simply won't know they are being scraped.

You may now be asking yourself, "Where can I get Proxy Data Scraping Technology for my project?" The "do-it-yourself" solution is, rather unfortunately, not simple at all. Setting up a proxy data scraping network takes a lot of time and requires that you either own a bunch of IP addresses and suitable servers to be used as proxies, not to mention the IT guru you need to get everything configured properly. You could consider renting proxy servers from select hosting providers, but that option tends to be quite pricey, though arguably better than the alternative: dangerous and unreliable (but free) public proxy servers.

There are literally thousands of free proxy servers located around the globe that are simple enough to use. The trick, however, is finding them. Many sites list hundreds of servers, but locating one that is working, open, and supports the type of protocols you need can be a lesson in persistence, trial, and error. However, if you do succeed in discovering a pool of working public proxies, there are still inherent dangers in using them. First off, you don't know who the server belongs to or what activities are going on elsewhere on the server. Sending sensitive requests or data through a public proxy is a bad idea. It is fairly easy for a proxy server to capture any information you send through it or that it sends back to you. If you choose the public proxy method, make sure you never send any transaction through it that might compromise you or anyone else in case disreputable people are made aware of the data.

A less risky scenario for proxy data scraping is to rent a rotating proxy connection that cycles through a large number of private IP addresses. There are several of these companies available that claim to delete all web traffic logs, which allows you to anonymously harvest the web with minimal threat of reprisal. Such companies offer large-scale anonymous proxy solutions, but often carry a fairly hefty setup fee to get you going.
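As a hedged illustration of the rotating-proxy idea described above, the short Python sketch below cycles a pool of placeholder proxy addresses through the requests library; the addresses and URLs are invented, and any real setup would substitute its own provider details.

# A minimal sketch of rotating requests through a pool of proxies.
# The proxy addresses below are placeholders, not real servers.
import itertools
import requests

proxy_pool = itertools.cycle([
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
])

urls = ["http://example.com/page/%d" % i for i in range(1, 4)]

for url in urls:
    proxy = next(proxy_pool)  # each request appears to come from a new IP
    try:
        response = requests.get(url, proxies={"http": proxy, "https": proxy},
                                timeout=10)
        print(url, response.status_code, "via", proxy)
    except requests.RequestException as exc:
        print("failed", url, "via", proxy, "-", exc)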

Source:http://ezinearticles.com/?Assuring-Scraping-Success-with-Proxy-Data-Scraping&id=248993

Monday 28 November 2016

How Xpath Plays Vital Role In Web Scraping Part 2

Here is a piece of content on XPath which is the follow-up to How Xpath Plays Vital Role In Web Scraping.

Let's dive into a real-world example: scraping the Amazon website for information about its deals of the day. Amazon's deals of the day can be found at this URL. So navigate to the Amazon deals-of-the-day page in Firefox and find the XPath selectors. Right click on the deal you like and select "Inspect Element with Firebug":

If you observe the image below closely, you can find the source of the deal image and the name of the deal in the src and alt attributes respectively.

So now let's write a generic XPath that gathers the name and image source of the product (deal).

  //img[@role="img"]/@src   ## for image source
  //img[@role="img"]/@alt   ## for product name

In this post, I’ll show you some tips we found valuable when using XPath in the trenches.

If you have an interest in Python and web scraping, you may have already played with the nice requests library to get the content of pages from the Web. Maybe you have toyed around using Scrapy selector or lxml to make the content extraction easier. Well, now I’m going to show you some tips I found valuable when using XPath in the trenches and we are going to use both lxml and Scrapy selector for HTML parsing.

Avoid using contains(.//text(), 'search text') in your XPath conditions. Use contains(., 'search text') instead.

Here is why: the expression .//text() yields a collection of text elements, a node-set (a collection of nodes). When a node-set is converted to a string, which happens when it is passed as an argument to a string function like contains() or starts-with(), the result is the text of the first element only.

from scrapy import Selector
html_code = """<a href="#">Click here to go to the <strong>Next Page</strong></a>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()          # Let's type this only once
print xp('//a//text()')                        # Take a peek at the node-set
[u'Click here to go to the ', u'Next Page']    # output of above command
print xp('string(//a//text())')                # convert it to a string
[u'Click here to go to the ']                  # output of the above command

Let's do the same with lxml. You can implement the XPath with either lxml or the Scrapy selector, as the XPath expression is the same for both methods.

lxml code:

from lxml import html
html_code = """<a href="#">Click here to go to the <strong>Next Page</strong></a>"""
parsed_body = html.fromstring(html_code)        # Parse the text into a tree
print parsed_body.xpath('//a//text()')          # Take a peek at the node-set
[u'Click here to go to the ', u'Next Page']     # output
print parsed_body.xpath('string(//a//text())')  # Convert it to a string
u'Click here to go to the '                     # output

A node converted to a string, however, puts together the text of itself plus of all its descendants:

>>> xp('//a[1]')  # selects the first a node
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']

>>> xp('string(//a[1])')  # converts it to string
[u'Click here to go to the Next Page']
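To connect this back to the contains() tip above, here is a small additional example with the same selector and markup; it is a sketch of why contains(., ...) matches while contains(.//text(), ...) does not:

>>> xp('//a[contains(.//text(), "Next Page")]')   # the node-set collapses to its first text node
[]

>>> xp('//a[contains(., "Next Page")]')           # the string value of the whole node is searched
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']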

Beware of the difference between //node[1] and (//node)[1]: //node[1] selects all the nodes occurring first under their respective parents, while (//node)[1] selects all the nodes in the document and then gets only the first of them.

from scrapy import Selector

html_code = """<ul class="list">
<li>1</li>
<li>2</li>
<li>3</li>
</ul>

<ul class="list">
<li>4</li>
<li>5</li>
<li>6</li>
</ul>"""

sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()

xp("//li[1]") # get all first LI elements under whatever it is its parent

[u'<li>1</li>', u'<li>4</li>']

xp("(//li)[1]") # get the first LI element in the whole document

[u'<li>1</li>']

xp("//ul/li[1]")  # get all first LI elements under an UL parent

[u'<li>1</li>', u'<li>4</li>']

xp("(//ul/li)[1]") # get the first LI element under an UL parent in the document

[u'<li>1</li>']

Also,

//a[starts-with(@href, '#')][1] gets a collection of the local anchors that occur first under their respective parents, and (//a[starts-with(@href, '#')])[1] gets the first local anchor in the document.

When selecting by class, be as specific as necessary.

If you want to select elements by a CSS class, the XPath way to do the same job is the rather verbose:

*[contains(concat(' ', normalize-space(@class), ' '), ' someclass ')]

Let’s cook up some examples:

>>> sel = Selector(text='<p class="content-author">Someone</p><p class="content text-wrap">Some content</p>')

>>> xp = lambda x: sel.xpath(x).extract()

BAD: because there are multiple classes in the attribute

>>> xp("//*[@class='content']")

[]

BAD: gets more content than we need

>>> xp("//*[contains(@class,'content')]")

[u'<p class="content-author">Someone</p>',
 u'<p class="content text-wrap">Some content</p>']

GOOD:

>>> xp("//*[contains(concat(' ', normalize-space(@class), ' '), ' content ')]")
[u'<p class="content text-wrap">Some content</p>']

And many times, you can just use a CSS selector instead, and even combine the two of them if needed:

ALSO GOOD:

>>> sel.css(".content").extract()
[u'<p class="content text-wrap">Some content</p>']

>>> sel.css('.content').xpath('@class').extract()
[u'content text-wrap']

Learn to use all the different axes.

It is handy to know how to use the axes; you can follow through these examples.

In particular, you should note that following and following-sibling are not the same thing, this is a common source of confusion. The same goes for preceding and preceding-sibling, and also ancestor and parent.

Useful trick to get text content

Here is another XPath trick that you may use to get the interesting text contents: 

//*[not(self::script or self::style)]/text()[normalize-space(.)]

This excludes the content from the script and style tags and also skips whitespace-only text nodes.
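Here is a small hedged sketch applying that expression with lxml; the sample HTML is invented for illustration:

# Extract only the "visible" text nodes, skipping script/style content
# and whitespace-only nodes (sample HTML invented for this sketch).
from lxml import html

page = """<html><head><style>p {color: red;}</style></head>
<body><script>var x = 1;</script>
<p>Visible paragraph.</p>
<div>   </div>
<span>More visible text.</span></body></html>"""

tree = html.fromstring(page)
texts = tree.xpath("//*[not(self::script or self::style)]/text()[normalize-space(.)]")
print([t.strip() for t in texts])
# expected: ['Visible paragraph.', 'More visible text.']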

Tools & Libraries Used:

Firefox
Firefox inspect element with firebug
Scrapy : 1.1.1
Python : 2.7.12
Requests : 2.11.0

 Have questions? Comment below. Please share if you found this helpful.

Source: http://blog.datahut.co/how-xpath-plays-vital-role-in-web-scraping-part-2/

Friday 11 November 2016

Tapping The Mining Services Goldmine

In Australia, resources booms tend to come and go. In a recent speech, Reserve Bank Deputy Governor Ric Battellino identified five major booms over the last two hundred years - from the gold rush of the 1850s, to our current minerals and energy boom.

Many have argued that the current boom is different from anything we've experienced before, with the modernisation of the Chinese and Indian economies likely to keep demand high for decades. That's led some analysts to talk of a resources supercycle. And yet a supercycle is still a cycle.

By definition, cycles are uneven, with commodity prices ebbing and flowing in response to demand, economic conditions and market sentiment. And the share prices of resources companies tend to move with them.

Which raises the question: what's the best way for investors to tap into the potential of the mining boom, without the heart-stopping volatility that mining stocks sometimes deliver?

Invest in the store that sells the spade

Legend has it that the people who really profited from Australia's gold rush weren't the miners who flocked to the fields, but the store-owners who sold them their spades and pans. You can put the same principle to work today by investing in mining services and engineering companies.

Here are five reasons to consider giving mining services companies a place in your portfolio:

1. Growing demand


In November, the Australian Bureau of Agricultural and Resource Economics reported that mining and energy companies plan to invest a record $132.9bn in new projects, a 58% increase from the previous year. That includes 72 projects at an advanced stage of development, such as the $43bn Gorgon LNG project and the $20bn Olympic Dam expansion. The mining services sector is poised to benefit from all of them.

The sector also stands to benefit from Australia's worsening skills shortage, with more companies looking to contractors to provide essential services in remote locations.

2. Less volatility


Resource stocks tend to fluctuate with commodity prices, which are subject to international economic forces and market sentiment beyond the control of any individual company. As a result, they are among the most volatile companies on the Australian sharemarket. But mining services stocks, while still exposed to the commodities cycle, tend to be more stable.

3. More predictable cash flow


One reason for the comparative volatility of commodity companies is that their cash flow can be very variable. In the development phase, they need to make significant capital expenditure, often leading to negative cash flows. And while they enjoy healthy revenues in the production phase, that revenue may diminish as a resource is exhausted, unless they make further investments in exploration and development.
In contrast, mining services companies require comparatively little capital investment, with more predictable cash flows over the long-term.

4. Higher dividends

Predictable cash flows and lower capital expenditures often allow services companies to pay out more of their earnings as dividends, making them more appealing for income-oriented investors.

5. No need to pick winners

Many miners are highly leveraged to demand for a single commodity, whether it's gold, coal, copper or iron ore. Some are reliant on a single mine or field. Services companies, by contrast, generally have a more diversified customer base.

Source: http://ezinearticles.com/?Tapping-The-Mining-Services-Goldmine&id=5924837

Thursday 22 September 2016

Things to take care while doing Web Scraping!!!

In the present day and age, the term web scraping has become very popular in data science. Basically, web scraping is extracting information from websites using pre-written programs and web scraping scripts. Many organizations have successfully used web site scraping to build relevant and useful databases that they use on a daily basis to enhance their business interests. This is the age of Big Data, and web scraping is one of the trending techniques in data science.

Throughout my journey of learning web scraping and implementing many successful scraping projects, I have come across some great experiences we can learn from.  In this post, I’m going to discuss some of the approaches to take and approaches to avoid while executing web scraping.

Use Proxies: Anonymously scraping data from websites

One should not scrape a website with a single IP address, because when you repeatedly request web pages for scraping, there is a chance that the remote web server will block your IP address, preventing further requests. To overcome this situation, one should scrape websites with the help of proxy servers (anonymous scraping). This will minimize the risk of getting trapped and blacklisted by a website. Use proxies to hide your identity (network details) from remote web servers while scraping data. You may also use a VPN instead of proxies to anonymously scrape websites.

Take maximum data and store it.

Do not follow the approach of "process the web page as it comes from the remote server". Instead, take all the information and store it to disk first. This approach will be useful when your scraping algorithm breaks in the middle; in that case you don't have to start scraping again. Never download the same content more than once, as you are just wasting bandwidth. Try to download all content to disk in one go and then do the processing.
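A minimal sketch of that "download once, process later" idea is shown below; the cache directory, file naming scheme and URL are assumptions made for the example.

# A minimal sketch of the "download once, process later" approach.
# URLs and file naming are assumptions made for the example.
import os
import hashlib
import requests

CACHE_DIR = "scraped_pages"
os.makedirs(CACHE_DIR, exist_ok=True)

def fetch_and_store(url):
    """Download a page only if we don't already have it on disk."""
    name = hashlib.md5(url.encode("utf-8")).hexdigest() + ".html"
    path = os.path.join(CACHE_DIR, name)
    if os.path.exists(path):          # never download the same content twice
        return path
    response = requests.get(url, timeout=10)
    with open(path, "wb") as f:
        f.write(response.content)
    return path

# Later, parsing works entirely from the local copies, so a broken
# parser does not force a re-crawl.
local_file = fetch_and_store("http://example.com/")
print("stored at", local_file)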

Follow strict rules in parsing:

Check various rules while parsing the information from the web site. For example, if you expect a value to be a date, then check that it really is a date. This may greatly improve the quality of the information. When you get unexpected data, the algorithm needs to be changed accordingly.
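For example, a hedged sketch of such a rule in Python might validate a scraped date before accepting it; the field name and the accepted formats are assumptions for illustration.

# A sketch of strict parsing rules: verify a scraped value really is a
# date before accepting it (field name and formats are assumptions).
from datetime import datetime

def parse_listing_date(raw_value):
    """Return a date object, or None if the scraped text is not a date."""
    for fmt in ("%Y-%m-%d", "%d %b %Y"):
        try:
            return datetime.strptime(raw_value.strip(), fmt).date()
        except ValueError:
            continue
    return None

for value in ["2016-11-22", "22 Sep 2016", "Contact seller"]:
    parsed = parse_listing_date(value)
    if parsed is None:
        print("unexpected value, review the parser:", repr(value))
    else:
        print("ok:", parsed)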

Respect Robots.txt

Robots.txt specifies the set of rules that should be followed by web crawlers and robots. I strongly advise you to consider and adjust your crawler to fully respect robots.txt. Robots.txt contains instructions on the exact pages that you are allowed to crawl, the user agent, and the requisite intervals between page requests. Following these instructions minimizes the chance of getting blacklisted or banned by the website owner.
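A small sketch of honouring robots.txt with Python 3's standard library is shown below; the site, URL and user agent string are placeholders.

# Check robots.txt before crawling, using the standard library.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")
rp.read()

user_agent = "my-crawler"
url = "http://example.com/private/page.html"

if rp.can_fetch(user_agent, url):
    print("allowed to crawl", url)
else:
    print("robots.txt disallows", url)

# Some robots.txt files also declare a crawl delay to respect between requests.
print("requested crawl delay:", rp.crawl_delay(user_agent))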

Use XPath Smartly

XPath is a nice option for selecting elements of the HTML document more flexibly than CSS selectors. Be careful, though: the HTML structure can change from page to page, so an XPath you wrote for one page may fail to extract data on another page due to changes in the structure.

Obey the Website's Terms and Conditions:

Some websites make it absolutely apparent in their terms and conditions that they are particularly opposed to web scraping activities on their content. This can make you vulnerable to possible ethical and legal implications.

Test sample scrape and verify the data with actual scrape

Once your web scraping project is set up, you need to test it for some time. Check the extracted data. If something is not right, find out the cause, make changes accordingly, and you will eventually arrive at a solid web scraping project.

Source: http://webdata-scraping.com/things-take-care-web-scraping/

Thursday 1 September 2016

How Web Scraping can Help you Detect Weak spots in your Business

Business intelligence is not a new term. Businesses have always been employing experts for analysing the progress, market and industry trends to keep their growth graph going up. Now that we have big data and the tool to gather this data – Web scraping, business intelligence has become even more fruitful. In fact, business intelligence has become a necessary thing to survive now that the competition is fierce in every industry. This is the reason why most enterprises depend on web scraping solutions to gather the data relevant to their businesses. This data is highly insightful and dependable enough to make critical business decisions. Business intelligence from web scraping is definitely a game changer for companies as it can supply relevant and actionable data with minimal effort.

Most businesses have weak spots that are being overlooked or hidden from the plain sight. These weak spots, if left unnoticed can gradually result in the downfall of your company. Here is how you can use data acquired through web scraping to detect weak spots in your business and strengthen them.

Competitor analysis

Many a time, you can find out the flaws in your business by keeping a close watch on your competitors. Competitor analysis is something that we owe to web scraping, as the level of competitive intelligence that you can derive from web scraping has never been achievable in the past. By crawling forums and social media sites where your target audience is, you can easily find out if your competitor is leveraging something you have overlooked. Competitor analysis is all about staying updated on each and every action by your competitors, so that you can always be prepared for their next strategic move. If your competitors are doing better than you, this data can be used to make a comparison between your business and theirs, which would give you insights on where you lack.

Brand monitoring on Social media

With social media acting as a platform where businesses and customers can interact with each other, the data available on these sites is increasingly becoming relevant to businesses. Any issues in your business operations will also be reflected in your customer sentiments. Social media is a goldmine of sentiment data that can help you detect issues within your company. By analysing the posts that mention your brand or product on social media sites, you can identify which departments of your company are functioning well and which aren't.

For example, if you are an ecommerce portal and many users are complaining on social media about delivery issues from your company, you might want to switch to a logistics partner who does a better job. The ability to identify such issues at the earliest stage is extremely important, and that's where web scraping becomes a life saver. With social media scraping, monitoring your brand on social media is easier than ever before, and the chances of minor issues escalating into bigger ones are almost non-existent. Brand monitoring is extremely crucial if you are a business operating in the online space. Social media scraping solutions are provided by many leading web scraping companies, which totally eliminates the technical complications associated with the process for you.

Finding untapped opportunities

There are always new and untapped markets and opportunities that are relevant to your business. Finding them is not going to be an easy task with manual and outdated methods of research. Web scraping can fill this gap and help you find opportunities that your company can use to extend its reach and progress. Sometimes, targeting the right audience makes all the difference that you've been trying to make. By using web crawling to find mentions of your relevant keywords on the web, you can easily stay updated on your niche and move into any new untapped markets. Web crawling for keywords is better explained in our previous blog.

Bottom line

It is not a cakewalk to stay ahead in the competition considering how competitive every industry has become in this digital age. It is crucial to find the weak spots and untapped opportunities of your business before someone else does. Of course, you can always use some help from the technology when you need it. Web scraping is clearly the best way to find and gather data that would help you figure these out. With web crawling solutions that can completely take care of this niche process, nothing is stopping you from using the data and insights that the web has in stock for your business.

Source: https://www.promptcloud.com/blog/web-scraping-detect-weak-spots-business

Wednesday 24 August 2016

How Web Data Extraction Services Will Save Your Time and Money by Automatic Data Collection

Data scraping is the process of extracting data from the web by using a software program, from proven websites only. Anyone can use the extracted data for any purpose they desire in various industries, as the web holds every kind of important data in the world. We provide some of the best web data extraction software. We have expertise and one-of-a-kind knowledge in web data extraction, image scraping, screen scraping, email extraction services, data mining and web grabbing.

Who can use Data Scraping Services?

Data scraping and extraction services can be used by any organization, company or firm that would like to have data from a particular industry, data on targeted customers, a particular company, or anything else available on the net, such as email IDs, website names, search terms or any other data found on the web. Most of the time, a marketing company will use data scraping and data extraction services to market a particular product in a certain industry and to reach targeted customers. For example, if company X wants to contact restaurants in California, our software can extract the data on restaurants in that city, and a marketing company can use this data to market its restaurant-related product. MLM and network marketing companies also use data extraction and data scraping services to find new customers by extracting data on prospective customers, whom they can then contact by telephone, postcard or email marketing; in this way they build their huge networks and large groups for their own products and companies.

We have helped many companies find the particular data they need. For example:

Web Data Extraction

Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human end-users and not for ease of automated use. Because of this, tool kits that scrape web content were created. A web scraper is an API to extract data from a web site. We help you to create a kind of API that lets you scrape data as per your need. We provide a quality and affordable web data extraction application.

Data Collection

Normally, data transfer between programs is accomplished using info structures suited for automated processing by computers, not people. Such interchange formats and protocols are typically rigidly structured, well-documented, easily parsed, and keep ambiguity to a minimum. Very often, these transmissions are not human-readable at all. That's why the key element that distinguishes data scraping from regular parsing is that the output being scraped was intended for display to an end-user.

Email Extractor

A tool that automatically extracts email IDs from reliable sources is called an email extractor. It basically serves the function of collecting business contacts from various web pages, HTML files, text files or other formats, without duplicate email IDs.
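As a hedged sketch of what such a tool does internally, the Python snippet below scans a couple of placeholder files with a simple regular expression and de-duplicates the addresses it finds; the pattern and file names are illustrative assumptions.

# A simple email extractor sketch: scan text or HTML files and collect
# addresses without duplicates (file names are placeholders).
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(paths):
    found = set()                      # a set removes duplicate IDs for free
    for path in paths:
        with open(path, "r", errors="ignore") as f:
            found.update(EMAIL_RE.findall(f.read()))
    return sorted(found)

print(extract_emails(["contacts.html", "notes.txt"]))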

Screen scraping

Screen scraping refers to the practice of reading text information from a computer display terminal's screen and collecting visual data from a source, instead of parsing data as in web scraping.

Data Mining Services

Data mining is the process of extracting patterns from information, and it is becoming an increasingly important tool for transforming data into information. We can deliver the output in any format, including MS Excel, CSV, HTML and many others, according to your requirements.

Web spider

A Web spider is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Many sites, in particular search engines, use spidering as a means of providing up-to-date data.
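A minimal sketch of such a spider in Python is shown below; the start URL, the page limit and the same-domain rule are assumptions made for the example, not a description of any particular search engine's crawler.

# A minimal web spider sketch: breadth-first crawl limited to a handful
# of pages on one site (start URL and limits are assumptions).
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from lxml import html

def crawl(start_url, max_pages=10):
    seen, queue = set(), deque([start_url])
    domain = urlparse(start_url).netloc
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            tree = html.fromstring(requests.get(url, timeout=10).content)
        except requests.RequestException:
            continue
        # enqueue same-domain links in an orderly, methodical fashion
        for href in tree.xpath("//a/@href"):
            link = urljoin(url, href)
            if urlparse(link).netloc == domain and link not in seen:
                queue.append(link)
    return seen

print(crawl("http://example.com/"))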

Web Grabber

Web grabber is just another name for data scraping or data extraction.

Web Bot

Web Bot is a software program that is claimed to be able to predict future events by tracking keywords entered on the Internet. Web bot software is a good program for pulling out articles, blogs, relevant website content and other website-related data. We have worked with many clients on data extraction, data scraping and data mining, and they are really happy with our services; we provide high-quality service and make your data work very easy and automatic.

Source: http://ezinearticles.com/?How-Web-Data-Extraction-Services-Will-Save-Your-Time-and-Money-by-Automatic-Data-Collection&id=5159023