
Dask cannot read a file that Pandas reads fine

  •  1
  •  pceccon  ·  6 years ago

    I was using Pandas to read and process my data and was running into some memory issues. I can read a large file with:

    import pandas as pd
    df = pd.read_csv('mydata.csv.gz', sep=';')
    

    However, when doing the same thing with Dask, I get an error:

    import dask.dataframe as dd
    df_base = dd.read_csv('CoilsSampleFiltered.csv.gz', sep=';')
    

    Traceback:

    ---------------------------------------------------------------------------
    UnicodeDecodeError                        Traceback (most recent call last)
    <ipython-input-7-abc513f2a657> in <module>()
    ----> 1 df_base = dd.read_csv('CoilsSampleFiltered.csv.gz', sep=';')
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\dask\dataframe\io\csv.py in read(urlpath, blocksize, collection, lineterminator, compression, sample, enforce, assume_missing, storage_options, **kwargs)
        424                            enforce=enforce, assume_missing=assume_missing,
        425                            storage_options=storage_options,
    --> 426                            **kwargs)
        427     read.__doc__ = READ_DOC_TEMPLATE.format(reader=reader_name,
        428                                             file_type=file_type)
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\dask\dataframe\io\csv.py in read_pandas(reader, urlpath, blocksize, collection, lineterminator, compression, sample, enforce, assume_missing, storage_options, **kwargs)
        324 
        325     # Use sample to infer dtypes
    --> 326     head = reader(BytesIO(b_sample), **kwargs)
        327 
        328     specified_dtypes = kwargs.get('dtype', {})
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
        707                     skip_blank_lines=skip_blank_lines)
        708 
    --> 709         return _read(filepath_or_buffer, kwds)
        710 
        711     parser_f.__name__ = name
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
        447 
        448     # Create the parser.
    --> 449     parser = TextFileReader(filepath_or_buffer, **kwds)
        450 
        451     if chunksize or iterator:
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
        816             self.options['has_index_names'] = kwds['has_index_names']
        817 
    --> 818         self._make_engine(self.engine)
        819 
        820     def close(self):
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
       1047     def _make_engine(self, engine='c'):
       1048         if engine == 'c':
    -> 1049             self._engine = CParserWrapper(self.f, **self.options)
       1050         else:
       1051             if engine == 'python':
    
    ~\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
       1693         kwds['allow_leading_cols'] = self.index_col is not False
       1694 
    -> 1695         self._reader = parsers.TextReader(src, **kwds)
       1696 
       1697         # XXX
    
    pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
    
    pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header()
    
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
    

    I am trying to figure out what the problem is. The file was written from R, which uses UTF-8 by default.
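
    One way to check what is actually in the file is to inspect its first bytes; a gzip stream always starts with the magic bytes 1f 8b, which matches the 0x8b at position 1 in the error above. A minimal check, assuming the file sits in the current working directory:

    # Inspect the raw bytes at the start of the file: a gzip stream
    # begins with the magic bytes 0x1f 0x8b.
    with open('CoilsSampleFiltered.csv.gz', 'rb') as f:
        print(f.read(2))   # b'\x1f\x8b' means the file is gzip-compressed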

    1 Answer  |  6 years ago

  •  3
  •   Eric Yang  ·  6 years ago

    You are not reading a plain-text CSV: the file is gzip-compressed. Pandas detected this automatically (read_csv defaults to compression='infer', which recognizes the .gz extension), but with Dask you need to specify the compression scheme explicitly. The 0x8b in the UnicodeDecodeError is in fact the second byte of the gzip magic number (1f 8b), i.e. the raw compressed bytes were being decoded as UTF-8 text.

    df = dd.read_csv("CoilsSampleFiltered.csv.gz", sep=';', compression='gzip')
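
    Note that gzip is not a splittable format, so Dask cannot break a .gz file into multiple blocks; depending on the Dask version it may warn about this and suggest blocksize=None, which loads each compressed file as a single partition. A minimal sketch combining this with the separator from the question:

    import dask.dataframe as dd

    # Read the gzip-compressed CSV as a single partition per file,
    # since gzip does not support random access / splitting.
    df_base = dd.read_csv('CoilsSampleFiltered.csv.gz',
                          sep=';',
                          compression='gzip',
                          blocksize=None)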