
pandas.io.common.CParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file

I have large CSV files, each more than 10 MB in size, and about 50+ such files. These inputs have more than 25 columns and more than 50K rows. All of them have the same headers, and I am …

Solution 1:

If I understand your problem, you have large csv files with the same structure that you want to merge into one big CSV file.

My suggestion is to use dask from Continuum Analytics to handle this job. You can merge your files but also perform out-of-core computations and analysis of the data just like pandas.

### make sure you include the [complete] tag
pip install dask[complete]

Solution Using Your Sample Data from DropBox

First, check your versions of dask and pandas. For me, dask == 0.11.0 and pandas == 0.18.1.

import dask
import pandas as pd
print (dask.__version__)
print (pd.__version__)

Here's the code to read in ALL your csvs. I had no errors using your DropBox example data.

import dask.dataframe as dd
from dask.delayed import delayed
import dask.bag as db
import glob

filenames = glob.glob('/Users/linwood/Downloads/stack_bundle/rio*.csv')

'''
The key to getting around the CParserError was using sep=None.
It came from this post:
http://stackoverflow.com/questions/37505577/cparsererror-error-tokenizing-data
'''

# custom reader function for the dataframes built from each filename
def reader(filename):
    return pd.read_csv(filename, sep=None)

# build list of delayed pandas csv reads; then read in as dask dataframe

dfs = [delayed(reader)(fn) for fn in filenames]
df = dd.from_delayed(dfs)
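
A side note on sep=None: it turns on delimiter sniffing, which the C parser cannot do, so pandas falls back to the python engine (and may emit a ParserWarning). Here is a minimal sketch that makes the fallback explicit; the filename rio_sample.csv is just a placeholder, not one of your actual files:

import pandas as pd

# sep=None makes pandas sniff the delimiter, which requires the python engine;
# passing engine='python' explicitly avoids the ParserWarning
sample = pd.read_csv('rio_sample.csv', sep=None, engine='python')
print(sample.shape)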



The rest of this is extra stuff

# print the count of values in each column; perfect data would have the same count in every column
# you have dirty data, as the counts will show
print(df.count().compute())

The next step is doing some pandas-like analysis. Here is some code that first "cleans" the 'tweetFavoriteCt' column. Not all of the values are integers, so I replace the strings with "0" and convert everything else to an integer. Once I have the integer conversion, I show a simple analytic where I filter the entire dataframe down to only the rows where the favorite count is greater than 3.

# function to convert values to integers, replacing strings with 0
# this is just a sample cleaning step for the dask dataframe; you can come up with your own
def conversion(value):
    try:
        return int(value)
    except (ValueError, TypeError):
        return 0

# apply the function to the column, create a new column of cleaned data
clean = df['tweetFavoriteCt'].apply(lambda x: conversion(x), meta=('tweetFavoriteCt', 'int64'))

# set new column equal to our cleaning code above; your data is dirty :-(
df['cleanedFavoriteCt'] = clean

The last bit of code shows some dask analysis, how to write the merged file to disk, and how to load it back into pandas. Be warned: if you have tons of CSVs, the .compute() call below will load the entire merged dataset into memory.

# retrieve the 50 tweets with the highest favorite count
print(df.nlargest(50, ['cleanedFavoriteCt']).compute())

# only show the tweets that have been favorited more than 3 times
# TweetID 763525237166268416 is VERY popular: 7000+ favorites
print(df[df.cleanedFavoriteCt.apply(lambda x: x > 3, meta=('cleanedFavoriteCt', bool))].compute())

'''
This is the final step.  The .compute() code below turns the 
dask dataframe into a single pandas dataframe with all your
files merged. If you don't need to write the merged file to
disk, I'd skip this step and do all the analysis in 
dask. Get a subset of the data you want and save that.  
'''
df = df.reset_index().compute()
df.to_csv('./test.csv')
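
If the merged result is too big to hold in memory at all, one alternative (not part of the original answer) is to let dask write the output itself, one CSV per partition, so no single giant pandas dataframe is ever materialized:

# writes one CSV per partition, e.g. merged-0.csv, merged-1.csv, ...
# the '*' in the name is replaced by the partition number
df.to_csv('./merged-*.csv', index=False)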

Now, if you want to switch to pandas for the merged csv file:

import pandas as pd
dff = pd.read_csv('./test.csv')
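
If test.csv turns out to be too large to read back comfortably in one go, pandas can also stream it in chunks; a small sketch, not part of the original answer:

# read the merged file 100,000 rows at a time instead of all at once
for chunk in pd.read_csv('./test.csv', chunksize=100000):
    print(chunk.shape)  # replace with your own per-chunk processing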

Let me know if this works.

Stop here

ARCHIVE: Previous solution; a good example of using dask to merge CSVs.

The first step is making sure you have dask installed. There are install instructions for dask on the documentation page, but the pip install dask[complete] command shown above should work.

With dask installed it's easy to read in the files.

Some housekeeping first. Assume we have a directory of CSVs where the filenames are my18.csv, my19.csv, my20.csv, etc. Name standardization and a single directory location are key. This works if you put your CSV files in one directory and serialize the names in some way; a sketch of that step is below.
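
Here is a hedged sketch of that housekeeping step. The source path /path/to/scattered/ is a placeholder; adjust it to wherever your files currently live:

import glob
import os
import shutil

# copy the scattered CSVs into ./daskTest/ as my0.csv, my1.csv, ...
# so that the wildcard dd.read_csv('./daskTest/my*.csv') below picks them all up
os.makedirs('./daskTest', exist_ok=True)
for i, path in enumerate(sorted(glob.glob('/path/to/scattered/*.csv'))):
    shutil.copy(path, os.path.join('./daskTest', 'my{}.csv'.format(i)))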

In steps:

  1. Import dask and read in all the csv files using a wildcard. This merges all the csvs into one single dask.dataframe object. You can do pandas-like operations immediately after this step if you want.

import dask.dataframe as dd
ddf = dd.read_csv('./daskTest/my*.csv')
ddf.describe().compute()

  2. Write the merged dataframe to disk in the same directory as the original files and name it master.csv.

ddf.to_csv('./daskTest/master.csv', index=False)

  3. Optionally, read master.csv (much bigger in size) back into a dask.dataframe object for computations. This can also be done after step one above; dask can perform pandas-like operations on the staged files...this is a way to do "big data" in Python.

# reads in the merged file as one BIG out-of-core dataframe; can perform pandas-like functions
newddf = dd.read_csv('./daskTest/master.csv')

# check the length; this is now the length of all the merged files
# in this example, 50,000 rows times 11 files = 550,000 rows
len(newddf)

# perform pandas-like summary stats on entire dataframe
newddf.describe().compute()

Hopefully this helps answer your question. In three steps, you read in all the files, merge them into a single dataframe, and write that massive dataframe to disk with only one header and all of your rows.
