Handbook of Massive Data Sets

Handbook of Massive Data Sets (Massive Computing)

Galleon Product ID 37553186
UPC / ISBN 146134882X
Shipping Weight 3.55 lbs
Binding: Paperback
Shipping Dimension 9.21 x 6.18 x 2.09 inches
Edition Softcover reprint of the original 1st ed. 2002
Number Of Pages 1223
Publication Date 2013-12-30

*Used item/s available.
*Prices and stock may change without prior notice.

About Handbook Of Massive Data Sets

The proliferation of massive data sets brings with it a series of special computational challenges. This "data avalanche" arises in a wide range of scientific and commercial applications. With advances in computer and information technologies, many of these challenges are beginning to be addressed by diverse interdisciplinary groups that include computer scientists, mathematicians, statisticians, and engineers, working in close cooperation with application domain experts. High-profile applications include astrophysics, biotechnology, demographics, finance, geographical information systems, government, medicine, telecommunications, the environment, and the internet. John R. Tucker of the Board on Mathematical Sciences has stated: "My interest in this problem (massive data sets) is that I see it as the most important cross-cutting problem for the mathematical sciences in practical problem solving for the next decade, because it is so pervasive."

The Handbook of Massive Data Sets comprises articles written by experts on selected topics, each dealing with some major aspect of massive data sets. It contains chapters on information retrieval both on the internet and in the traditional sense, web crawlers, massive graphs, string processing, data compression, clustering methods, wavelets, optimization, external memory algorithms and data structures, the US national cluster project, high-performance computing, data warehouses, data cubes, semi-structured data, data squashing, data quality, billing in the large, fraud detection, and data processing in astrophysics, air pollution, biomolecular data, earth observation, and the environment.