1. HDF5 is a completely new Hierarchical Data Format product consisting of a data format specification and a supporting library implementation. HDF5 is designed to address some of the limitations of the older HDF product. Note that HDF and HDF5 are two different products. HDF is a data format first developed in the 1980s and currently in Release 4.x (HDF Release 4.x). HDF5 is a new data format first released in Beta in 1998, designed to better meet the ever-increasing demands of scientific computing and to take better advantage of the ever-increasing capabilities of computing systems. HDF5 is currently in Release 1.x (HDF5 Release 1.x).

2. HDF5 files are organized in a hierarchical structure, with two primary structures: groups and datasets.

HDF5 group: a grouping structure containing instances of zero or more groups or datasets, together with supporting metadata. An HDF5 group is a structure containing zero or more HDF5 objects. A group has two parts:
- A group header, which contains a group name and a list of group attributes.
- A group symbol table, which is a list of the HDF5 objects that belong to the group.

HDF5 dataset: a multidimensional array of data elements, together with supporting metadata. A dataset is stored in a file in two parts: a header and a data array. The header contains information that is needed to interpret the array portion of the dataset, as well as metadata (or pointers to metadata) that describes or annotates the dataset. Header information includes the name of the object, its dimensionality, its number-type, information about how the data itself is stored on disk, and other information used by the library to speed up access to the dataset or maintain the file's integrity. There are four essential classes of information in any header: name, datatype, dataspace, and storage layout.

Name: a dataset name is a sequence of alphanumeric ASCII characters.

Datatype: HDF5 allows one to define many different kinds of datatypes.
There are two categories of datatypes: atomic datatypes and compound datatypes. Atomic datatypes can also be system-specific, or NATIVE, and all datatypes can be named. Atomic datatypes include integers and floating-point numbers. Each atomic type belongs to a particular class and has several properties: size, order, precision, and offset. For example, the IEEE floating-point datatypes come in 32-bit and 64-bit widths and in big-endian and little-endian byte orders:
- H5T_IEEE_F32BE
- H5T_IEEE_F32LE
- H5T_IEEE_F64BE
- H5T_IEEE_F64LE

Dataspace: a dataset dataspace describes the dimensionality of the dataset. The dimensions of a dataset can be fixed (unchanging), or they may be unlimited, which means that they are extendible (i.e. they can grow larger). Properties of a dataspace consist of the rank (number of dimensions) of the data array, the actual sizes of the dimensions of the array, and the maximum sizes of the dimensions of the array. For a fixed-dimension dataset, the actual size is the same as the maximum size of a dimension. When a dimension is unlimited, the maximum size is set to the value H5S_UNLIMITED. From: https://support.hdfgroup.org/HDF5/doc1.8/H5.intro.html
I recently installed netcdf and found that many of the how-tos online are incomplete: the library and header files never get placed in the default search directories, so the netcdf build keeps complaining that this or that library is missing. I did the installation by hand and am recording the process here for future reference.

Installing netcdf requires these packages: zlib, szip, netcdf, hdf5, curl, mpich2. Of these, zlib and szip are compression libraries needed by hdf5 and netcdf; either one of the two will do.

(1) Download the source code for all of these packages.

(2) Install zlib:

tar -zxvf zlib***.tar.gz
cd zlib*
mkdir /usr/local/zlib
./configure --prefix=/usr/local/zlib --libdir=/usr/lib/ --includedir=/usr/include/ --sharedlibdir=/usr/share
make
make check
make install

Be sure to pass --libdir=/usr/lib/ --includedir=/usr/include/ so that the libraries and headers land where the system searches by default; this saves you from setting LD_LIBRARY_PATH later. If anything is unclear, run ./configure --help.

(3) Install szip:

tar -zxvf szip***.tar.gz
cd szip*
mkdir /usr/local/szip
./configure --prefix=/usr/local/szip --libdir=/usr/lib/ --includedir=/usr/include/ --sharedstatedir=/usr/share --bindir=/usr/bin/
make
make check
make install

Here --bindir=/usr/bin/ puts the executables in the default location, so there is no need to set PATH.

(4) Install curl:

tar -zxvf curl***.tar.gz
cd curl*
mkdir /usr/local/curl
./configure --prefix=/usr/local/curl --libdir=/usr/lib/ --includedir=/usr/include/ --sharedstatedir=/usr/share --bindir=/usr/bin/
make
make check
make install

(5) Install mpich2:

tar -zxvf mpich***.tar.gz
cd mpich*
mkdir /usr/local/mpich
./configure --prefix=/usr/local/mpich --libdir=/usr/lib/ --includedir=/usr/include/ --sharedstatedir=/usr/share --bindir=/usr/bin/
make
make check
make install

(6) Install hdf5:

tar -zxvf hdf5***.tar.gz
cd hdf5*
mkdir /usr/local/hdf5
./configure --prefix=/usr/local/hdf5 --libdir=/usr/lib/ --includedir=/usr/include/ --sharedstatedir=/usr/share --bindir=/usr/bin/
make
make check
make install

(7) Install netcdf:

tar -zxvf netcdf***.tar.gz
cd netcdf*
mkdir /usr/local/netcdf
./configure --prefix=/usr/local/netcdf --libdir=/usr/lib/ --includedir=/usr/include/ --sharedstatedir=/usr/share --bindir=/usr/bin/
make
make check
make install

If the Intel compilers are installed, configure may fail with an error about math.h; in that case prepend CC=icc to the configure command.

Done!