Data warehouses are an integral component of business decision-making when large volumes of data must be analyzed. Data in a warehouse traditionally comes from sources including, but not limited to, relational databases, legacy systems, and flat files. Data from these sources must be processed to conform to the warehouse's data model and data types. Extract, Transform, and Load (ETL) tools take data from these multiple sources and populate the data warehouse, typically into a star schema. Once the data is loaded into the star schema, it can be queried by reporting tools, which form the reporting layer of the data warehouse.

The main purpose of this research is to develop an approach that solves this data-warehousing Big Data problem. This thesis presents a method that can be integrated with both an existing and a new data warehouse: a hybrid approach that makes use of the best of both worlds, i.e. Hadoop and relational databases. I will also present a sample implementation of the model.
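The extract, transform, and load pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: a hypothetical flat-file sales source, invented table and column names (`dim_product`, `fact_sales`), and an in-memory dictionary standing in for the warehouse tables; the thesis's actual model and implementation are presented later.

```python
# Illustrative ETL sketch: extract rows from a flat-file source,
# transform them to conform to the warehouse's types, and load them
# into a star schema (one fact table, one dimension table).
# All source data, table names, and column names are hypothetical.
import csv
import io

SOURCE = """date,product,amount
2015-01-02,Widget,19.99
2015-01-02,Gadget,5.00
2015-01-03,Widget,19.99
"""

def extract(text):
    """Extract: read raw records from a flat-file source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: conform types and split rows into dimension and fact records."""
    product_dim = {}   # product name -> surrogate key
    facts = []
    for row in rows:
        key = product_dim.setdefault(row["product"], len(product_dim) + 1)
        facts.append({
            "date": row["date"],
            "product_key": key,              # foreign key into the dimension
            "amount": float(row["amount"]),  # string -> numeric type
        })
    return product_dim, facts

def load(product_dim, facts):
    """Load: an in-memory dict stands in for the warehouse tables."""
    return {
        "dim_product": [{"product_key": k, "name": n} for n, k in product_dim.items()],
        "fact_sales": facts,
    }

warehouse = load(*transform(extract(SOURCE)))
```

In a star schema, reporting-layer queries would then join `fact_sales` to `dim_product` on the surrogate key; a production pipeline would of course write to real database tables rather than a dictionary.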