Six Frequently Asked Hadoop Interview Questions and Answers

2016-12-24 17:50 | Editor: 轩皓宇

As the big data wave continues to surge, many technology professionals are eager to make their mark in this new field. Beyond building up solid expertise and hands-on experience, however, the interview itself should not be overlooked. Since these roles have emerged only recently, many candidates are unsure what to expect. Don't worry: below we walk through six questions that come up again and again.


1. What is Hadoop?

2. Why are organizations moving from traditional data warehouse tools to smarter data hubs based on the Hadoop ecosystem?

3. How does a smart, large-scale data hub differ architecturally from a traditional data warehouse?

4. What are the benefits of a Hadoop-based data hub?

5. What are the key steps in a big data solution?

6. How do you choose a file format for data storage and processing, and why?

English original: 6 Frequently Asked Hadoop Interview Questions and Answers

Are you preparing for an interview soon and need to have knowledge of Hadoop? DON'T PANIC! Here are some questions you may be asked and the answers you should try to give.

Q1. What is Hadoop?

Hadoop is an open-source software framework for storing large amounts of data and processing/querying that data on a cluster of commodity-hardware (i.e. low-cost) nodes. In short, Hadoop consists of the following:

HDFS (Hadoop Distributed File System): HDFS allows you to store huge amounts of data in a distributed and redundant manner. For example, a 1 GB (i.e. 1024 MB) text file can be split into 8 * 128 MB blocks and stored on 8 different nodes in a Hadoop cluster. Each block can be replicated 3 times for fault tolerance, so that if one node goes down, you have backups. HDFS is good for sequential, write-once-read-many access.
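
As a small illustration of how an application talks to HDFS, the sketch below uses the Java FileSystem API to set a replication factor and print which datanodes hold each block of a file. The cluster configuration, the /data/users.txt path, and the replication factor of 3 are assumptions made for this example, not anything prescribed by the article.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBlockInfo {
    public static void main(String[] args) throws Exception {
        // Connects to whatever cluster is configured in core-site.xml / hdfs-site.xml.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical 1 GB file already stored in HDFS.
        Path file = new Path("/data/users.txt");

        // Ask HDFS to keep 3 copies of every block for fault tolerance.
        fs.setReplication(file, (short) 3);

        // Print the datanodes that hold each 128 MB block.
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```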

 


MapReduce: A computational framework that processes large amounts of data in a distributed and parallel manner. When you run a query on the 1 GB file above for all users with age > 18, say 8 "map" functions will run in parallel, each extracting the users with age > 18 from its own 128 MB split, and then a "reduce" function will run to combine all the individual outputs into a single final result.
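
To make that concrete, here is a minimal MapReduce sketch that counts users older than 18, assuming hypothetical comma-separated name,age records; the class names, paths, and record layout are invented for the example, not taken from the original article.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AdultUserCount {

    // One mapper runs per input split; it emits ("adults", 1) for every matching record.
    public static class AgeFilterMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final Text KEY = new Text("adults");
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Assumed record layout: name,age
            String[] fields = line.toString().split(",");
            if (fields.length == 2 && fields[1].trim().matches("\\d+")
                    && Integer.parseInt(fields[1].trim()) > 18) {
                context.write(KEY, ONE);
            }
        }
    }

    // The reducer combines the partial counts from all mappers into one total.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable count : counts) {
                total += count.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "adult-user-count");
        job.setJarByClass(AdultUserCount.class);
        job.setMapperClass(AgeFilterMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Each mapper handles one split, and the reducer (also used as a combiner here) sums the partial counts into the final total.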

YARN (Yet Another Resource Negotiator): A framework for job scheduling and cluster resource management.

Hadoop ecosystem: 15+ frameworks and tools like Sqoop, Flume, Kafka, Pig, Hive, Spark, Impala, etc., to ingest data into HDFS, to wrangle it (i.e. transform, enrich, aggregate, etc.) within HDFS, and to query it from HDFS for business intelligence and analytics. Some tools, like Pig and Hive, are abstraction layers on top of MapReduce, while others, like Spark and Impala, improve on MapReduce's architecture/design to achieve much lower latencies and support near-real-time (i.e. NRT) and real-time processing.
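
For contrast with the MapReduce sketch above, the same age > 18 query written against Spark's DataFrame API (Java) is only a few lines; again, the file path and column names are assumptions made for the example.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

public class AdultUserCountSpark {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("adult-user-count")
                .getOrCreate();

        // Assumed layout: name,age (no header), stored in HDFS.
        Dataset<Row> users = spark.read()
                .csv("hdfs:///data/users.txt")
                .toDF("name", "age");

        long adults = users.filter(col("age").cast("int").gt(18)).count();
        System.out.println("Users older than 18: " + adults);

        spark.stop();
    }
}
```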

 


Q2. Why Are Organizations Moving from Traditional Data Warehouse Tools to Smarter Data Hubs Based on Hadoop Ecosystems?

Organizations are investing to move from their:

Existing data infrastructure, which:

predominantly uses "structured data" stored on high-end, expensive hardware;
predominantly processes it as ETL batch jobs that ingest data into RDBMS and data warehouse systems for data mining, analysis, and reporting to support key business decisions;
predominantly handles data volumes in the gigabyte-to-terabyte range;

to a smarter data infrastructure based on Hadoop, where:

structured (e.g. RDBMS), unstructured (e.g. images, PDFs, docs), and semi-structured (e.g. logs, XML) data can be stored on cheaper commodity machines in a scalable and fault-tolerant manner;
data can be ingested via batch jobs and via near-real-time (i.e. NRT, 200 ms to 2 seconds) streaming (e.g. Flume and Kafka);
data can be queried with low latency (i.e. under 100 ms) using tools like Spark and Impala;
larger data volumes, in the terabyte-to-petabyte range, can be stored.

This empowers organizations to make better business decisions with bigger, smarter data and more powerful tools to ingest data, to wrangle stored data (e.g. aggregate, enrich, transform, etc.), and to query the wrangled data with low latency for reporting and business intelligence.

Q3. How Does a Smarter & Bigger Data Hub Architecture Differ from a Traditional Data Warehouse Architecture?

Traditional Enterprise Data Warehouse Architecture

 

(architecture diagram in the original article) Broadly: structured data from operational systems is loaded via ETL batch jobs into an RDBMS/data warehouse running on high-end hardware, the schema is defined before loading (schema-on-write), and BI tools query the warehouse directly.

Hadoop-based Data Hub Architecture

 

(architecture diagram in the original article) Broadly: structured, semi-structured, and unstructured data is ingested via batch and streaming tools (e.g. Sqoop, Flume, Kafka) into HDFS on commodity hardware, structure is applied at processing time (schema-on-read), and engines such as Hive, Spark, and Impala wrangle and query the data for reporting and analytics.

Q4. What Are the Benefits of Hadoop-Based Data Hubs?

Improved overall SLAs (i.e. Service Level Agreements) as data volume and complexity grow, thanks to, for example, the "shared nothing" architecture, parallel processing, memory-intensive processing frameworks like Spark and Impala, and resource preemption in YARN's capacity scheduler.


Scaling data warehouses can be expensive: adding high-end hardware capacity and data warehouse tool licenses can cost significantly more. Hadoop-based solutions are not only cheaper, thanks to commodity hardware nodes and open-source tools, but can also complement the data warehouse by offloading data transformations to Hadoop tools like Spark and Impala for more efficient parallel processing of big data. This also frees up data warehouse resources.

Exploration of new avenues and leads. Hadoop can provide an exploratory sandbox for the data scientists to discover potentially valuable data from social media, log files, emails, etc., that are not normally available in data warehouses.

Better flexibility. Often business requirements change, and this requires changes to schema and reports. Hadoop-based solutions are not only flexible to handle evolving schemas, but also can handle semi-structured and unstructured data from disparate sources like social media, application log files, images, PDFs, and document files.

Q5. What Are Key Steps in Big Data Solutions?

Ingesting data, storing data (i.e. data modelling), and processing data (i.e. data wrangling, data transformation, and querying).

Ingesting Data

Extracting data from various sources such as:

Relational database management systems (RDBMS) like Oracle, MySQL, etc.
Enterprise resource planning (ERP) systems like SAP.
Customer relationship management (CRM) systems like Siebel, Salesforce, etc.
Social Media feeds and log files.
Flat files, docs, and images.

And storing them in a data hub based on the Hadoop Distributed File System (HDFS). Data can be ingested via batch jobs (e.g. running every 15 minutes or once every night), via near-real-time streaming (i.e. 100 ms to 2 minutes), or via real-time streaming (i.e. under 100 ms).

One common term used in Hadoop is "schema-on-read". This means unprocessed (aka raw) data can be loaded into HDFS, with a structure applied at processing time based on the requirements of the processing application. This is different from "schema-on-write", which is used in RDBMS, where the schema must be defined before the data can be loaded.
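
A brief sketch of what schema-on-read looks like in practice, using Spark's Java API: the raw files sit in HDFS untouched, and this particular reading application imposes its own structure at query time. The directory, column names, and query are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;

public class SchemaOnReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("schema-on-read")
                .getOrCreate();

        // The raw files were dropped into HDFS as-is; no schema existed at load time.
        // The structure below is imposed only now, by this particular reading application.
        StructType schema = new StructType()
                .add("name", "string")
                .add("age", "int")
                .add("country", "string");

        Dataset<Row> users = spark.read()
                .schema(schema)
                .csv("hdfs:///raw/users/2016-12-24/");

        users.createOrReplaceTempView("users");
        spark.sql("SELECT country, COUNT(*) FROM users WHERE age > 18 GROUP BY country").show();

        spark.stop();
    }
}
```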

Storing Data

Data can be stored on HDFS or in NoSQL databases like HBase. HDFS is optimized for sequential access and the "write-once & read-many" usage pattern. HDFS has high read and write rates, as it can parallelize I/O across multiple drives. HBase sits on top of HDFS and stores data as key/value pairs in a columnar fashion; columns are grouped together as column families. HBase is suited for random read/write access (a short HBase example follows the list below). Before data can be stored in Hadoop, you need to consider the following:

Data Storage Formats: There are a number of file formats (e.g. CSV, JSON, Sequence, Avro, Parquet, etc.) and data compression algorithms (e.g. Snappy, LZO, gzip, bzip2, etc.) that can be applied. Each has particular strengths. Compression algorithms like LZO and bzip2 are splittable.

Data Modelling: Despite the schema-less nature of Hadoop, schema design is an important consideration. This includes directory structures and schema of objects stored in HBase, Hive and Impala. Hadoop often serves as a data hub for the entire organization, and the data is intended to be shared. Hence, carefully structured and organized storage of your data is important.

Metadata management: Metadata related to stored data.

Multitenancy: Smarter data hubs host multiple users, groups, and applications, which often results in challenges relating to governance, standardization, and management.
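
As mentioned above, HBase provides random, key-based reads and writes on top of HDFS. Below is a minimal sketch using the HBase Java client; the "user_profiles" table, the "info" column family, and the row key are assumptions made for the example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseUserProfile {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             // Hypothetical table "user_profiles" with one column family "info".
             Table table = connection.getTable(TableName.valueOf("user_profiles"))) {

            // Random write: store columns under row key "user123".
            Put put = new Put(Bytes.toBytes("user123"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("age"), Bytes.toBytes("29"));
            table.put(put);

            // Random read: fetch the row back by its key.
            Result result = table.get(new Get(Bytes.toBytes("user123")));
            String name = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")));
            System.out.println("name = " + name);
        }
    }
}
```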

Processing Data

Hadoop's processing frameworks use HDFS and a "shared nothing" architecture, in which each node is completely independent of the other nodes in the system, so there are no shared resources (CPU, memory, or disk storage) that can become a bottleneck. With Hadoop processing frameworks like Spark, Pig, Hive, Impala, etc., each node processes a distinct subset of the data, and there is no need to manage access to shared data. Shared-nothing architectures are very scalable, because more nodes can be added without further contention, and fault tolerant, because each node is independent, there are no single points of failure, and the system can quickly recover from the failure of an individual node.

Q6. How Would You Go About Choosing Among the Different File Formats for Storing and Processing Data?

One of the key design decisions is the choice of file format, based on:

Usage patterns, e.g. accessing 5 out of 50 columns vs. accessing most of the columns.
Splittability, so data can be processed in parallel.
Block compression, trading storage space against read/write/transfer performance.
Schema evolution, to add, modify, and rename fields.

CSV Files


CSV files are common for exchanging data between Hadoop and external systems. CSVs are readable and parsable, and handy for bulk loading from databases into Hadoop or into an analytic database. When using CSV files in Hadoop, never include header or footer lines; each line of the file should contain a record. CSV files offer limited support for schema evolution, as new fields can only be appended to the end of a record and existing fields can never be removed. CSV files do not support block compression, so compressing a CSV file comes at a significant read performance cost.

JSON Files

JSON records are different from JSON files: in a JSON-records file, each line is its own JSON record. Because JSON stores both schema and data together for each record, it enables full schema evolution and splittability. However, JSON files do not support block-level compression.

Sequence Files

Sequence files store data in a binary format with a structure similar to that of CSV files. Like CSV, Sequence files do not store metadata, so the only schema evolution option is appending new fields to the end of the record. Unlike CSV files, Sequence files do support block compression. Sequence files are also splittable. They can be used to solve the "small files problem" by packing many small files (e.g. XML files) into one Sequence file, storing each filename as the key and its contents as the value (see the sketch after the note below). Because reading Sequence files is comparatively complex, they are better suited to in-flight (i.e. intermediate) data storage.

Note: A SequenceFile is Java-centric and cannot be used cross-platform.
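
A minimal sketch of that small-files pattern with the Hadoop Java API, assuming a hypothetical HDFS directory of small XML files and an output path chosen for the example:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilesToSequenceFile {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path inputDir = new Path("/raw/small-xml/");        // hypothetical directory of small files
        Path output = new Path("/staging/small-xml.seq");   // one combined, block-compressed SequenceFile

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(output),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class),
                SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK))) {

            for (FileStatus status : fs.listStatus(inputDir)) {
                byte[] contents = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(0, contents);
                }
                // Key = original filename, value = raw file contents.
                writer.append(new Text(status.getPath().getName()), new BytesWritable(contents));
            }
        }
        fs.close();
    }
}
```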

Avro Files

Avro files are suited for long-term storage with a schema. Avro files store metadata with the data, but also allow you to specify an independent schema for reading the file. This enables full schema evolution support, allowing you to rename, add, and delete fields and change the data types of fields by defining a new, independent read schema. Avro files define the schema in JSON format, and the data is stored in a compact binary format. Avro files are also splittable and support block compression. They are better suited to usage patterns where row-level access is required, i.e. all the columns in a row are queried, and less suited when a row has 50+ columns but the usage pattern requires access to only 10 or fewer of them; the Parquet file format is better suited to that columnar access pattern.
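
A small sketch of writing and reading Avro with the generic Java API; the "User" record schema and its fields are invented for the example, and a reader could just as well pass its own evolved schema to GenericDatumReader.

```java
import java.io.File;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroUserExample {
    private static final String SCHEMA_JSON =
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"name\",\"type\":\"string\"},"
            + "{\"name\":\"age\",\"type\":\"int\"}]}";

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        File file = new File("users.avro");

        // Write: the schema is embedded in the file alongside the binary-encoded records.
        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, file);
            GenericRecord user = new GenericData.Record(schema);
            user.put("name", "Alice");
            user.put("age", 29);
            writer.append(user);
        }

        // Read: a reader may supply its own (evolved) schema; here we reuse the writer's.
        try (DataFileReader<GenericRecord> reader =
                     new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
            for (GenericRecord record : reader) {
                System.out.println(record.get("name") + " is " + record.get("age"));
            }
        }
    }
}
```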

Columnar Formats, e.g. RCFile, ORC

RDBMS store records in a row-oriented fashion, which is efficient when many columns of a record need to be fetched. Row-oriented writing is also efficient if all the column values are known at the time a record is written to disk. But this approach is not efficient when only, say, 10% of the columns in a row need to be fetched, or when not all column values are known at write time. This is where columnar files make more sense. Columnar formats work well:

by skipping I/O and decompression on columns that are not part of the query;
for queries that only access a small subset of columns;
for data-warehousing-type applications where users want to aggregate certain columns over a large collection of records.

The RC and ORC formats were written specifically for Hive and are not as general-purpose as Parquet.

Parquet Files

Parquet is a columnar file format like RC and ORC. Parquet files support block compression and are optimized for query patterns in which 10 or fewer columns are selected from records with 50+ columns. Parquet write performance is slower than that of non-columnar file formats. Parquet also supports limited schema evolution, allowing new columns to be added at the end. Parquet can be read and written with Avro APIs and Avro schemas.
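
Building on the Avro example above, the sketch below writes and reads Parquet through the parquet-avro bindings. It assumes the parquet-avro and hadoop-client dependencies are on the classpath; the output path and the Snappy codec choice are illustrative, and the exact builder signatures vary somewhat between parquet-avro versions.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class ParquetWithAvroExample {
    private static final String SCHEMA_JSON =
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"name\",\"type\":\"string\"},"
            + "{\"name\":\"age\",\"type\":\"int\"}]}";

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        Path file = new Path("hdfs:///warehouse/users.parquet");

        // Write columnar, block-compressed Parquet using an Avro schema and Avro records.
        try (ParquetWriter<GenericRecord> writer = AvroParquetWriter.<GenericRecord>builder(file)
                .withSchema(schema)
                .withCompressionCodec(CompressionCodecName.SNAPPY)
                .build()) {
            GenericRecord user = new GenericData.Record(schema);
            user.put("name", "Alice");
            user.put("age", 29);
            writer.write(user);
        }

        // Read back; with columnar storage, only the columns a query touches need to be scanned.
        try (ParquetReader<GenericRecord> reader =
                     AvroParquetReader.<GenericRecord>builder(file).build()) {
            GenericRecord record;
            while ((record = reader.read()) != null) {
                System.out.println(record.get("name") + " is " + record.get("age"));
            }
        }
    }
}
```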

So, in summary, you should favor Sequence, Avro, and Parquet file formats over the others; Sequence files for raw and intermediate storage, and Avro and Parquet files for processing.
