Cloud Computing Introduction
History of Cloud Computing
Cloud computing companies
AMAZON: Provides infrastructure services including S3, SimpleDB, and EC2.
Rackspace: Provides Cloud Sites, Cloud Drive, and Cloud Servers.
GoGrid: Offers cloud storage and cloud hosting.
IBM: Provides Computing on Demand and the Smart Business Storage Cloud.
AT&T: Provides Synaptic Storage as a Service and Synaptic Compute as a Service.
Google App Engine: A platform for developing applications in Python and Java.
Force.com: A platform for developing applications in Apex, its proprietary programming language.
Microsoft Azure: A platform for developing .NET applications.
Google: Provides the SaaS space, including Gmail, Google Docs, Google Calendar, and Picasa.
IBM: Offers web-based email and calendaring services through LotusLive iNotes, providing messaging capabilities to business users.
Zoho: An online suite of products similar to Microsoft Office.
Importance of Cloud Computing
Back-up and restore data
10 principles of effective website design
Recovery models in SQL Server determine how SQL Server manages the log files and prepares your database for recovery after data loss or another failure. Each model represents a different approach to balancing the trade-off between conserving disk space and providing granular disaster recovery options.
SIMPLE RECOVERY MODEL
SQL Server maintains only a minimal amount of information in the transaction log. It truncates the transaction log each time the database reaches a transaction checkpoint, leaving no log entries for disaster recovery purposes.
With this model you can restore only full or differential backups. It is not possible to restore such a database to a given moment; you can restore it only to the exact time when a full or differential backup occurred. Hence, you will lose any data changes made between the most recent full/differential backup and the time of the failure.
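To make the exposure concrete: under the simple model, everything written after the last full/differential backup is unrecoverable. A toy calculation of that window (the timestamps are made up for illustration):

```python
from datetime import datetime

last_differential = datetime(2024, 1, 8, 12, 0)  # most recent full/differential backup
failure_time = datetime(2024, 1, 8, 16, 36)      # moment of the failure

# Every change written in this window is lost under SIMPLE recovery,
# because no transaction log backups exist to replay.
exposure = failure_time - last_differential
print(exposure)  # 4:36:00
```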
FULL RECOVERY MODEL
With this model, SQL Server preserves the transaction log until you back it up.
This enables you to design a disaster recovery plan that contains both full and differential backups in conjunction with transaction log backups.
In case of a database failure, you restore the database from the full backup and then apply the data changes saved in the transaction log files. The full recovery model thus enables you to restore a database to a particular moment in time.
For example, if an incorrect alteration corrupted your information at 4:36 p.m. on Monday, you could use SQL Server's point-in-time restore to roll your database back to 4:35 p.m., just before the corruption.
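A point-in-time restore is expressed in T-SQL as a full restore with `NORECOVERY` followed by a log restore with `STOPAT`. A minimal Python helper that builds that script (the database and file names are purely illustrative):

```python
from datetime import datetime

def point_in_time_restore_sql(db, full_backup, log_backup, stop_at):
    """Build T-SQL for a point-in-time restore; all names are illustrative."""
    stamp = stop_at.strftime("%Y-%m-%dT%H:%M:%S")
    return "\n".join([
        # Restore the last full backup, leaving the DB in RESTORING state.
        f"RESTORE DATABASE [{db}] FROM DISK = N'{full_backup}' WITH NORECOVERY;",
        # Replay the transaction log only up to the chosen moment.
        f"RESTORE LOG [{db}] FROM DISK = N'{log_backup}' "
        f"WITH STOPAT = '{stamp}', RECOVERY;",
    ])

sql = point_in_time_restore_sql(
    "Sales", r"D:\backups\sales_full.bak", r"D:\backups\sales_log.trn",
    datetime(2024, 1, 8, 16, 35))
print(sql)
```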
BULK-LOGGED RECOVERY MODEL
The bulk-logged recovery model is similar to the full recovery model. The main difference is how it handles bulk changes to the database: the bulk-logged model records these operations in the transaction log using a method known as minimal logging, which saves log space yet prevents you from using the point-in-time restore option.
Microsoft recommends that the bulk-logged recovery model be used only for brief periods. Best practice dictates that you switch a database to the bulk-logged recovery model just before conducting bulk operations, and restore it to the full recovery model when those operations finish.
CHANGING RECOVERY MODELS IN SQL SERVER
- Open SQL Server Management Studio.
- Select the database:
Expand Databases, then select a user database or a system database.
- Open the Database Properties:
Right-click the database, then click Properties.
- View the current recovery model:
In the Select a page pane, click Options to see the current recovery model.
- Select the new recovery model:
Select either Full, Bulk-logged, or Simple.
- Click OK.
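The same change can also be made with the T-SQL statement `ALTER DATABASE ... SET RECOVERY`. A small Python sketch that emits the statement (the database name is hypothetical):

```python
VALID_MODELS = {"FULL", "BULK_LOGGED", "SIMPLE"}

def set_recovery_model_sql(database: str, model: str) -> str:
    """Return the ALTER DATABASE statement for the given recovery model."""
    # Normalise e.g. "bulk-logged" or "Bulk Logged" to the T-SQL keyword.
    model = model.upper().replace("-", "_").replace(" ", "_")
    if model not in VALID_MODELS:
        raise ValueError(f"unknown recovery model: {model}")
    return f"ALTER DATABASE [{database}] SET RECOVERY {model};"

print(set_recovery_model_sql("Sales", "bulk-logged"))
# ALTER DATABASE [Sales] SET RECOVERY BULK_LOGGED;
```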
In statistics there are two main methods used to analyse data: descriptive statistics and inferential statistics. Descriptive statistics summarises the data using simple indexes, such as measures of central tendency and measures of dispersion. Inferential statistics is used to draw conclusions about a larger population from data that is subject to random variation.
Measures of Central Tendency
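A minimal sketch of the common measures of central tendency and dispersion, using Python's standard `statistics` module on a made-up sample:

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 4, 8]  # illustrative sample

# Measures of central tendency: where the data is centred.
mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # middle value when sorted
mode = statistics.mode(data)      # most frequent value

# Measure of dispersion: how spread out the data is.
spread = statistics.pstdev(data)  # population standard deviation

print(median, mode)  # 6 8
```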
In every organisation, a data warehouse is used in the decision-making process. It is an integrated, subject-oriented, time-variant, and non-volatile collection of data.
The objective of data warehousing is to help people understand, at a high level, how a successful DWH project is implemented, drawing on experience from such projects, and to help business intelligence professionals on both the vendor and the client side.
Components of the Data warehousing
The components of data warehousing can be classified into six major parts. They are:
Tools: the business intelligence tool selections and the DWH selection teams. The tools covered include:
- ETL (Extraction, Transformation, and Loading)
- Database, Hardware
Steps: this section describes the typical milestones of a DWH project, from requirements gathering and query optimisation to production roll-out and beyond. Observations from the data warehousing field are also included.
Business intelligence: data warehousing is an important part of business intelligence. This section explains the relationship between the DWH and business intelligence, and also discusses business intelligence itself.
Trends: this section lists trends in the data warehousing field:
- Industry consolidation
- Lack of collaboration with data mining efforts
- How to measure success
- Quick implementation time
- Recipes for data warehousing project failure
The DWH Quality Management
DWH quality management delivers quality solutions across the end-to-end process. It enables data profiling and data quality checks, which are important when implementing a data warehouse. During data collection it establishes and generates mappings, and it keeps a check on the storage repository and the metadata, based on business rules and ethics.
There are four primary phases in the data warehousing life cycle. They are
- Assessment Quality
- Design Quality
- Transformation Quality
- Monitoring Quality
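A toy illustration of the kind of data-profiling check the assessment phase performs, counting missing required fields and duplicate rows (the column names and records are hypothetical):

```python
def profile(rows, required=("customer_id", "order_date")):
    """Count missing required fields and duplicate rows - a toy quality assessment."""
    missing = sum(1 for r in rows for col in required if not r.get(col))
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(sorted(r.items()))  # canonical form of the row
        dupes += key in seen
        seen.add(key)
    return {"missing_values": missing, "duplicate_rows": dupes}

rows = [
    {"customer_id": "C1", "order_date": "2024-01-05"},
    {"customer_id": "C1", "order_date": "2024-01-05"},  # duplicate row
    {"customer_id": "", "order_date": "2024-01-06"},    # missing customer_id
]
print(profile(rows))  # {'missing_values': 1, 'duplicate_rows': 1}
```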
The DWH design can begin after the tool and personnel selections have been made. The data warehousing life cycle involves many typical steps. They are
- Requirement Gathering
- Physical Environment Setup
- Data Modeling
- OLAP Cube Design
- Front End Development
- Report Development
- Performance Tuning
- Query Optimization
- Quality Assurance
- Rolling out to Production
- Production Maintenance
- Incremental Enhancements
If you have to design a DWH, the above steps are very important. They are typical of the DWH design phase, and each step has several sections: an explanation of what typical needs are accomplished during that phase, an estimate of how long the task usually takes, and the one or more documents produced at the end of the task. These deliverables explain the results of the task and are very important for communication between the clients and the consultants.
Finally, there are things to watch out for: some common claims about data warehousing are clear and real, while others are not.
Big Data Importance
Big data history and current applications
Basics of Dimensions and Measures
Green fields = continuous (create axes)
Blue fields = discrete (create headers)
Bold fields = sorted
Fields with AGG() or another aggregation prefix are aggregated (measures)
Fields with no () are discrete (often a dimension, not aggregated)
ATTR() runs something like "if MIN(var) = MAX(var) then return var, else return '*'", so it returns the single value when all rows in the partition agree and an asterisk when they disagree.
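The ATTR() behaviour described above can be sketched in a few lines of Python (this mimics the semantics only; it is not Tableau code):

```python
def attr(values):
    """Mimic Tableau's ATTR(): return the value if every row agrees, else '*'."""
    if not values:
        return None
    # MIN == MAX means all rows in the partition share one value.
    return values[0] if min(values) == max(values) else "*"

print(attr(["West", "West", "West"]))  # West
print(attr(["West", "East"]))          # *
```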