
Python Introduction

Python is a pure object-oriented language created by Guido van Rossum. It was first released in February 1991. Python is an open-source language, so you can even add new features to the Python source code. It is a beginner-friendly language; anyone can start coding with Python. As it is an interpreted language, it executes code line by line and stops execution at the first error. At the time of writing, the latest Python version is 3.7.

Why is Python so powerful?

  1. Open source
  2. Broad range of libraries
  3. Portable
  4. Extensible
  5. High-level
  6. Object-oriented
  7. Scalable
  8. Interpreted

Applications of Python

  1. Web Application
  2. Scientific and Numeric Computing
  3. Image Processing
  4. Data Science
  5. AI, Machine Learning, Deep Learning
  6. Robotics
  7. Game Designing
  8. I.S

Is Python a procedural or object-oriented language?

            It supports both procedural programming and object-oriented programming. As Python is derived from several languages, it takes functional features from C, object-oriented features from C++, scripting from Perl and shell script, modular programming from Modula-3, and syntax from the C and ABC languages.

             In Python, the most common tasks are grouped into functions, and Python also provides a number of predefined (built-in) functions. So it can be considered a procedural programming language.

 Eg:- type(), len(), max(), print()…

 

            On the other hand, everything in Python is a class or an object of a class, inheriting that class's properties. So Python is also an object-oriented language.

Eg:-

>>> a=10

>>> type(a)

<class 'int'>

      Variables and Datatypes

            Python is a dynamically typed (type-inferred) language. This means there is no need to declare the datatype of a variable before it is used. Variables can be assigned any value directly, and the Python interpreter recognises the datatype of the variable based on the value assigned to it. There is no fixed limit/range for numeric datatypes in Python. To find the datatype of a variable, we can use the type() function.

Eg:-

>>> s='supernatural'

>>> type(s)

<class 'str'>

Rules to declare variable/identifiers:

            The names given to variables, functions, classes and modules are called identifiers. The following rules must be followed to declare a variable/identifier:

  1. It should start with A-Z, a-z or an underscore (_) and can be followed by letters, digits (0-9) and underscores
  2. Keywords should not be used as identifiers
  3. Special characters and punctuation marks should not be used

Eg:-

            >>> _a=500      # valid

            >>> a b=500     # invalid

            >>> 1f=500      # invalid

            >>> a6gf=500    # valid

Datatypes:

            In Python there are six major datatypes:

  1. Number
  2. String
  3. List
  4. Tuple
  5. Set
  6. Dictionary

Every datatype in Python is a class; it contains data and methods. When a variable is initialised with a value, the datatype of the variable is detected automatically and the variable becomes an object of that particular class, so it can access the data and methods of that class.
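
For example, here is a short REPL sketch (added as an illustration, not part of the original notes) showing that a value belongs to a class and can call that class's methods:

>>> n=10
>>> type(n)            # n is an object of the int class
<class 'int'>
>>> n.bit_length()     # int method: number of bits needed to represent 10
4
>>> s='python'
>>> s.upper()          # str method available on every str object
'PYTHON'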

These datatypes are divided into two different categories

  1. immutable datatypes
  2. mutable datatypes

Immutable Datatypes:

            Data cannot be modified in place for immutable datatypes. Numbers, strings and tuples are immutable datatypes.

Mutable Datatypes:

            Data can be modified in place for mutable datatypes. Lists, sets and dictionaries come under this category.
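
A quick illustrative REPL sketch of the difference: a list can be changed in place, while trying to change a string in place raises an error.

>>> nums=[1, 2, 3]
>>> nums[0]=99          # lists are mutable, so in-place assignment works
>>> nums
[99, 2, 3]
>>> name='python'
>>> name[0]='P'         # strings are immutable, so this fails
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    name[0]='P'
TypeError: 'str' object does not support item assignment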

List

           A list is a sequence datatype in Python. It is a mutable datatype. A list is declared by enclosing the elements in square brackets [] separated by commas (,). It supports heterogeneous data.

 

>>> L=[1,2,5,7,10]

>>> type(L)

<class 'list'>

>>> L=[5, 'python', 4.5, 4+5j, [1,3,5,8]]

List Slicing:

            The slicing operator [] is used to access the elements of the list using the index of the list. The index of the list starts at zero and ends at len(list)-1. The len() function returns the length of the list.

>>> L[1]

'python'

>>> L[4]

[1, 3, 5, 8]

            The elements in the list can also be accessed using reverse indexing. Reverse indexing starts at -1 from the end of the list and goes down to -len(list).

>>> L[-1]

[1, 3, 5, 8]

>>> L[-3]

4.5

Range Slicing:

            The range slicing operator [start : end] is used to access a sublist of the given list.

The returned sublist starts at the index position start and ends at the index position end-1.

>>> L[0:3]

[5, 'python', 4.5]

>>> L[-3:-1]

[4.5, (4+5j)]

If the starting index position is not given, the sublist is returned from the beginning of the list to the specified end position.

>>> L[:2]

[5, 'python']

>>> L[:-3]

[5, 'python']

If the ending position is not specified, the list is returned from the given starting position to the end of the list.

>>> L[2:]

[4.5, (4+5j), [1, 3, 5, 8]]

The list L contains a string and a list as elements; these are sequence datatypes, so we can access their individual elements as well.

>>> L[1][:-3]

'pyt'

>>> L[-1][-1]

8

>>> L[-1][-2:]

[5, 8]

*Note: The slice takes three parameters: start, stop and step (step is optional).

>>> L[::-1]

[[1, 3, 5, 8], (4+5j), 4.5, 'python', 5]

>>> L[0:3:-1]

[]

>>> L[3:0:-1]

[(4+5j), 4.5, 'python']

>>> L[-1:-5:-1]

[[1, 3, 5, 8], (4+5j), 4.5, 'python']

>>>

List Methods:

            By default the list class contains some methods. We can access these methods by creating a list variable; this variable acts as an object/instance of the list class. To access the list methods we call them using:

 object_name.method_name()

>>> L=[5, 'python', 4.5, 4+5j, [1,3,5,8]]  # L is a list class object

  1. Append Method:

            The append method adds an element/object to the end of the existing list.

>>> L1=[2,35,6]

>>> L.append(L1)  # L1 is another list object

>>> L

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6]]

>>> L.append('india')  # we can also add elements directly in this way

>>> L

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india']

>>> 

  2. Copy Method:

            The copy method copies the elements of one list to another list, but their references are different.

>>> L3=L.copy()

>>> L

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india']

>>> L3

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india']

>>> L.append('Jhon')  # adding a new element to list L does not change L3, as their references differ

>>> L

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india', 'Jhon']

>>> L3

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india']

*To check which memory location a variable points to, use id(variable).

>>> id(L)

140081661061832

>>> id(L3)

140081632587016

>>> L2=L  # we can also assign directly in this way, but both names point to the same reference

>>> id(L)

140081661061832

>>> id(L2)

140081661061832

*If any change is performed on list L, it will be reflected in list L2 also.

>>> L.append(100000)

>>> L

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india', 'Jhon', 100000]

>>> L2

[5, 'python', 4.5, (4+5j), [1, 3, 5, 8], [2, 35, 6], 'india', 'Jhon', 100000]

>>>

  3. Count Method:

            It counts how many times a particular element occurs in list L.

>>> L.count(5)

1

  4. Clear Method:

            The clear method removes all the elements from the list.

>>> L.clear()

>>> L

[]

  5. Extend Method:

            It extends the list by appending the elements from an iterable.

>>> L=[1,3,4]

>>> L1=[5,6,1]

>>> L.extend(L1)

>>> L

[1, 3, 4, 5, 6, 1]

  6. Index Method:

            The index method returns the index of the first occurrence of a particular element. If the element is not found in the list, it raises an exception.

>>> L.index(1)  # element 1 is present at two index positions, i.e. 0 and 5

0

>>> L.index(10)         # element 10 is not present in list L

Traceback (most recent call last):

  File "<pyshell#4>", line 1, in <module>

    L.index(10)

ValueError: 10 is not in list

  7. Insert Method:

            This method takes two parameters: an index position and an element. If the index is beyond the end of the list, the element is placed at the end of the list.

>>> L

[1, 3, 4, 5, 6, 1]

>>> L.insert( 2, 55)

>>> L

[1, 3, 55, 4, 5, 6, 1]

>>> L.insert( 100, 55)

>>> L

[1, 3, 55, 4, 5, 6, 1, 55]

>>> L.insert( -4, 55)

>>> L

[1, 3, 55, 4, 55, 5, 6, 1, 55]

 

  8. Pop Method:

            The pop method removes and returns the last item from the list. It takes an optional parameter, the index position of the element to remove from the list.

>>> L.pop()

55

>>> L

[1, 3, 55, 4, 55, 5, 6, 1]

>>> L.pop(2)

55

>>> L

[1, 3, 4, 55, 5, 6, 1]

>>> L.pop(99)  # index 99 doesn't exist in list L

Traceback (most recent call last):

  File "<pyshell#23>", line 1, in <module>

    L.pop(99)

IndexError: pop index out of range

>>>

  9. Remove Method:

            This method takes an element of the list as an argument and removes the first occurrence of that element from the list. It raises an exception if the element is not present in the list.

>>> L

[1, 3, 4, 55, 5, 6, 1]

>>> L.remove( 1 )

>>> L

[3, 4, 55, 5, 6, 1]

>>>

  10. Reverse Method: This method reverses the elements of the list in place.

>>> L.reverse()

>>> L

[1, 6, 5, 55, 4, 3]

>>>  

  11. Sort Method:

            This method sorts the elements of the list in ascending order if all the elements belong to the same datatype. If elements are of different datatypes, it raises an exception.

>>> L

[1, 6, 5, 55, 4, 3]

>>> L.sort()

>>> L

[1, 3, 4, 5, 6, 55]

>>> L2=['a', 1, 4, 5, 44.0]

>>> L2.sort()

Traceback (most recent call last):

  File "<pyshell#32>", line 1, in <module>

    L2.sort()

TypeError: unorderable types: int() < str()

 

            It takes two optional parameters: key and reverse. The key parameter takes a function and sorts the elements in the list based on that function, e.g. len, min, max.

            The reverse parameter takes the boolean values True and False. By default this parameter is set to False, so elements are sorted in ascending order. If the parameter is set to True, the elements are sorted in descending order.

 

>>> L3=['python', 'java', 'C', 'pascal', 'CPP', 'programming']

>>> L3.sort()

>>> L3

['C', 'CPP', 'java', 'pascal', 'programming', 'python']

>>> L3.sort(reverse=True)

>>> L3

['python', 'programming', 'pascal', 'java', 'CPP', 'C']

>>> L3.sort( key=len, reverse=True)

>>> L3

['programming', 'python', 'pascal', 'java', 'CPP', 'C']

>>>

Tosca Meaning

 

Nowadays software testing tools are very useful and make our work better and easier. Testing tools are used for defect management and test case management, and they are very popular. This page looks at the meaning of Tosca and what the tool is concerned with functionally. Tosca is one of the important software testing tools, and its name is of Italian origin.
 

Introduction to Tosca

 
Tricentis Tosca is an enterprise suite for testing software applications. It is an automated testing tool that provides test cases and comprehensive, end-to-end test management. Tosca approaches the testing of applications with its LinearQ method from the design stage, so testing is considered from the earliest stage of product creation and from many aspects. The most important techniques that keep it ahead of its peers are model-based test techniques and risk-based test techniques.
 
Techniques of Tosca
 
Model-based and risk-based testing techniques are used in TOSCA.

Model-based test Technique

 
The main feature of Tosca is the improved leverage it gives to test automation, which is due to its model-based test technique. A model of the AUT (Application Under Test) is built and used instead of scripted automation testing. All the technical details about the AUT are kept in the model, separate from the logical test script and the test data; the two are merged at the time of test execution.
 
Risk-based test technique
 
This technique is used to assess risk with respect to the test cases and to identify the right set of tests affected by that risk. It suggests the right set of tests using various black-box test techniques such as decision boxes, boundary testing, equivalence partitioning, combinatorial methodologies and linear expansion.
 
Key Features
 
1. Test cases are weighted and prioritized on the basis of their importance and criticality. Reporting features are built on top of this, depicting how strongly the requirements are impacted by technical weak points.
 
2. Dynamic business steering is one of the main aims of TOSCA Commander. Not only the input data but the entire test case is made dynamic. Test cases are created using drag-and-drop features and validations are then added. This provides business-based descriptions for both automated and manual test cases.
 
3. Synthetic and dynamic test data generation and automated, business-driven test case steering are primary capabilities developed in TOSCA. GUI and CLI based test cases are executed with unified handling of functional and automated testing.
 

Supported platform

 
-Application programs: Siebel, SAP.
 
-Single point application programs: MS Outlook, MS Excel
 
-Web browsers: Firefox, Internet Explorer
 
-Application Development Environment: PowerBuilder
 
-Frameworks and programming languages: Java, Visual Basic, Delphi, .net including WPF
 
-Host Applications: 3270, 5250.
 
-Protocols and hardware: Web Services (SOAP), ODBC (Oracle driver), Flash.
 

The current version of TOSCA in the market is 12.2. The Tosca test suite includes:

 
-Tosca XScan (Tosca wizard)
 
-Tosca Executor
 
-Tosca Commander
 
-Test Repository
 
The TOSCA Commander is the core backbone of the suite: it is used for test creation, execution, analysis and management of the test scripts.
Cloud Computing
 Definition: Cloud computing generally means sharing and accessing information through internet web space instead of only using local hard drives, local servers and personal computers. "Cloud" here can be read as the internet web space.


Cloud computing Introduction

 
Cloud computing provides everything as an internet service. It allows users to access and manage services over the internet from anywhere. This introduction explains the data centres involved in cloud computing technology: many data centres are connected to the network, and each comprises computing software and storage.

History of Cloud Computing

 
In the early stage of this technology the client-server model was very common, used in combination with terminal applications and a mainframe. During that time information was stored at a central CPU, which was very expensive, and both types of resources were connected to the mainframe. Later these resources were arranged into smaller client-server systems, which became popular for storing huge amounts of data and revolutionised mass storage capacity.

Cloud computing companies

 
Cloud computing companies come in many shapes and sizes. Large vendors are launching one offering or another, and many startup companies are launching different types of products. The major vendors in cloud computing services are given below.
 
Infrastructure as a Service cloud computing companies:
 
  • Amazon: provides services including S3, SimpleDB and EC2.
  • Rackspace: provides services including Cloud Sites, Cloud Drive and Cloud Servers.
  • GoGrid: offers cloud storage and cloud hosting.
  • IBM: provides Computing on Demand and the Smart Business Storage Cloud.
  • AT&T: provides Synaptic Storage as a Service and Synaptic Compute as a Service.
 
Platform as a Service cloud computing companies
 
  • Google App Engine: used to develop applications in Python and Java on the Google App Engine platform.
  • Force.com: used to develop applications in Apex, its proprietary programming language, on the Force.com platform.
  • Microsoft Azure: provides .NET development on the Microsoft Azure platform.
Software as a Service companies
 
  • Google: provides SaaS offerings including Gmail, Google Docs, Google Calendar and Picasa.
  • IBM: offers web-based email services and LotusLive iNotes; it provides calendaring capabilities and messaging to business users.
  • Zoho: offers an online suite of products similar to Microsoft Office.

Importance of the cloud computing

 
Nowadays cloud computing is as important as the internet service itself. In the 90s the utilisation of the internet became broad; at present, cloud computing services have a similar reach, and most people have come to rely on cloud computing.
 
Cloud computing is already a reality:
 
In the future it will gain even higher value as an internet service, and it will be essential for everyone. It all comes down to continuity: it reduces the need for high energy use and physical space, and it offers a better economy.
 
Arguments in favor of the use of Cloud Computing:
 
There are several argumentative reasons for defending cloud computing. A few of them are listed below:
 
  • Elasticity of demand
  • Cost Savings
  • Speed
Advantages and disadvantages:
 
Advantages:
 
  • Cost efficiency
  • High Speed
  • Excellent accessibility
  • Back-up and restore data
Disadvantages
 
  • Security issues
  • Low bandwidth
  • Flexibility issues
  • Incompatibility

Recovery Models in SQL Server

 

Recovery models in SQL Server enable you to determine the way SQL Server manages the log files and readies your database for recovery after data loss or any other issue. Each of them represents a different way of balancing the trade-off between conserving disk space and granular disaster recovery options.

  • Simple
  • Full
  • Bulk-logged

SIMPLE RECOVERY MODEL

SQL Server keeps a minimal amount of data in the transaction log. SQL Server truncates the transaction log each time the database reaches a transaction checkpoint, leaving no log entries for disaster recovery purposes.

Using this model, we are able to restore full or differential backups only. It is not possible to restore such a database to a given moment in time; you can only restore it to the exact time when a full or differential backup happened. Hence, you will lose any data changes made between the most recent full/differential backup and the time of the failure.

FULL RECOVERY MODEL

With this model, SQL Server keeps the transaction log until you back it up.
This enables you to design a disaster recovery plan that contains both full and differential backups in conjunction with transaction log backups.

In case of a database failure, you have more flexibility when restoring databases that use the full recovery model. In addition to preserving the data changes stored in the transaction log files, the full recovery model enables you to restore a database to a particular moment in time.

For example: if an incorrect alteration corrupted your information at 4:36 p.m. on Monday, you could use SQL Server's point-in-time restore to roll back your database to 4:35 p.m.

BULK-LOGGED RECOVERY MODEL

The bulk-logged recovery model is similar to the full recovery model. The main difference is how it handles bulk changes made to databases: the bulk-logged model records these operations in the transaction log using a method known as minimal logging, which saves log space yet keeps you from utilising the point-in-time restore option.

Microsoft recommends that the bulk-logged recovery model be used only for brief time-frames. Best practice dictates that you change a database to the bulk-logged recovery model just before conducting bulk operations and restore it to the full recovery model when those operations finish.

CHANGING RECOVERY MODELS in SQL Server

  • Open the SQL server
  • Select the database:
    Expand Databases, select user DB or Sys DB.
  • Open the Database Properties:
    Right-click the database, and then click on Properties.
  • View the present Recovery Model:
    In the Select a page pane, click on Options to see the present recovery model.
  • Select the new Recovery Model:
    Select either Full, Bulk-logged, or Simple.
  • click OK.
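
Besides the Management Studio steps above, the recovery model can also be changed with a T-SQL ALTER DATABASE statement. Below is a minimal Python sketch using the pyodbc library; the driver name, server and the Sales database are placeholder assumptions, not details taken from this article.

import pyodbc

# Placeholder connection string: adjust the driver, server and authentication for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,   # run ALTER DATABASE outside of an open transaction
)
cur = conn.cursor()

# Check the current recovery model of the (hypothetical) Sales database.
cur.execute("SELECT recovery_model_desc FROM sys.databases WHERE name = ?", "Sales")
print(cur.fetchone()[0])     # e.g. SIMPLE

# Switch to the full recovery model (SIMPLE or BULK_LOGGED work the same way).
cur.execute("ALTER DATABASE Sales SET RECOVERY FULL")

cur.execute("SELECT recovery_model_desc FROM sys.databases WHERE name = ?", "Sales")
print(cur.fetchone()[0])     # FULL

conn.close()
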
Different types of Statistics


 

In statistics there are two main methods used to analyse data: descriptive statistics and inferential statistics. Descriptive statistics use simple indexes to summarise the data, such as measures of central tendency and measures of dispersion. Inferential statistics are used to draw conclusions from data that is subject to random variation. These are the different types of statistics we have.

 

Descriptive Statistics

Descriptive statistics are used to explain the basic features of the data in a study. Simple summaries of the sample and graphical measures are used to analyse the data, and they form the basis of the quantitative analysis of data. Descriptive statistics cover the measures of central tendency and the measures of spread (dispersion).
 

Measure of central Tendency

 
Plotting the frequency distribution shows the general shape of the data and gives a sense of how the values are bunched. Different statistics are used to describe the centre of the distribution; these statistics are referred to as measures of central tendency. The mean, median and mode are used to calculate the central tendency of data.
 
Mean
 
The mean is the most commonly used measure of central tendency. It is easy to handle in mathematical form and describes the average of the distribution. It is equal to ΣX/N, that is, the sum of the scores in the distribution divided by the total number of scores. The mean is the balance point of the distribution: if we subtract the mean from each value in the distribution, the sum of all these deviations will be zero.
 
Median
 
The median is the score that divides the distribution into two halves: half of the scores lie above it and half below. The data must be arranged in numerical order; the median is also known as the middle score. To locate the median we use the formula (N+1)/2, where N is the total number of scores. When N is odd, the result is an integer and the median is the value at that position in the ordered distribution; when N is even, the median is taken as the average of the two middle values.
 
Mode
 
The mode is another important measure used to describe a distribution. Simply defined, it is the most frequent or common score in the distribution. The value of the mode corresponds to the X value at the highest point of the distribution. If the highest frequency is shared by more than one value, the distribution is known as multimodal.
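
To make the three measures concrete, here is a small Python sketch using the standard library's statistics module; the list of scores is made up purely for illustration.

>>> import statistics
>>> scores=[4, 8, 6, 5, 3, 8, 9]
>>> statistics.mean(scores)      # sum of the scores / number of scores = 43/7
6.142857142857143
>>> statistics.median(scores)    # middle value of the ordered scores [3, 4, 5, 6, 8, 8, 9]
6
>>> statistics.mode(scores)      # most frequent score (8 occurs twice)
8
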
Data Warehousing


 

In every organisation a data warehouse is used to support the decision-making process. It is an integrated, subject-oriented, time-variant and non-volatile collection of data.

The object of data warehousing is to help people understand, at a high level, what it takes to implement a successful DWH project, based on experience from such projects, and to help business intelligence professionals on both the vendor side and the client side.

Components of the Data warehousing

The components of data warehousing can be classified into six major parts. They are:

Tools

The business intelligence tool selections made by the DWH selection teams. The tools covered are:

  • Reporting
  • OLAP
  • ETL (Extraction, Transformation, and Loading)
  • Metadata
  • Database, Hardware

Steps: This part covers the typical milestones of a DWH project, from requirements gathering and query optimisation to production roll-out and beyond. Observations from DWH projects in the field are also available.

Business intelligence

Data warehousing is one of the important parts of business intelligence. This step explains the relationship between the DWH and business intelligence and also discusses business intelligence itself.

 Trends

This section lists trends observed in the data warehousing field:

  • Industry consolidation
  • Lack of collaboration with data mining efforts
  • How to measure success
  • Quick implementation time
  • Recipes for data warehousing project failure

The DWH Quality Management

DWH quality management delivers quality solutions across the end-to-end process. It enables data profiling and data quality, which are important when implementing a data warehouse. During data collection it establishes and generates mappings and keeps a check on the storage repository and the metadata, based on the business rules and ethics.

There are four primary phases in the data warehousing quality life cycle. They are:

  • Assessment Quality
  •  Design Quality
  • Transformation Quality
  • Monitoring Quality

Life Cycle Steps

The DWH design can begin after the tool and personnel selections have been made. There are many typical steps involved in the data warehousing life cycle. They are:

  • Need Gathering
  • Physical Environment Setup
  • Data Modeling
  • ETL
  • OLAP Cube Design
  • Front End Development
  • Report Development
  • Performance Tuning
  • Query Optimization
  • Quality Assurance
  • Rolling out to Production
  • Production Maintenance
  • Incremental Enhancements

If we have to design a DWH then the above steps are very important. These steps are typical of the DWH design phase, and each of them has several different sections:

Task Description:

This section explains what typically needs to be accomplished during the particular DWH design task.

Time Requirements:

An estimate of how much time the particular DWH task takes.

Deliverables:

At the end of a typical DWH task, one or more documents are produced that explain the results of the task. These documents are very important for communication between the clients and the consultants.

Possible Pitfalls:

Things to watch out for; some of them are obvious and some of them are not, but all of them are real risks in data warehousing.

 


Big Data

 
Big data is the term that describes large volumes of data, both structured and unstructured, that flood a business on a day-to-day basis. But it is not the amount of data that matters; what matters is what organisations do with the data. Big data can be analysed for insights and used to develop strategic business moves and take better decisions.
 

Big Data Importance

 
The importance of big data does not revolve around how much data you have, but what you do with that data. You can collect data from any source and analyse it, and that analysis enables:
 
· Cost reduction
 
· Time reduction
 
· Better decision making
 
· New product development and optimised offerings
 
When we correlate big data with high-powered analytics, we can complete business-related tasks such as:
 
· Generating coupons at the point of sale based on the customer's or consumer's buying habits.
 
· Detecting the causes of failures and issues in near-real time.
 
· Recalculating entire risk portfolios in minutes.
 
· Detecting issues before they affect the organisation by classifying the data and tracing them back to their root causes.

 

Big data history and current applications

 
Big data is a relatively new term, but gathering and storing large amounts of information for eventual analysis is an age-old practice. The idea picked up energy in the early 2000s, when industry expert Doug Laney enunciated the now-standard meaning of big data as the three Vs:
 
Volume
 
Organisations gather data from a variety of sources, including business transactions, social media, and information from sensors or machine-to-machine data. In the past, storing it would have been a problem, but new technologies have eased the burden.
 
Velocity
 
In big data, data streams in at remarkable speed and must be dealt with in a timely manner. RFID tags, smart metering and sensors are driving the need to deal with torrents of data in near-real time.
 
Variety
 
Data comes in all types of formats: from structured numeric data in traditional databases to unstructured text documents, email, video, stock ticker data and financial transactions.
 
At SAS, we consider two additional dimensions when it comes to big data:
 
Variability
 
In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Daily or event-triggered peak data loads, for example a topic trending on social media, can be challenging to manage, even more so when the data is unstructured.
 
Complexity
 
Nowadays data is gathered in different ways and comes from many sources. This makes it difficult to link, cleanse, match and transform the data across systems. However, it is necessary to connect the data and correlate relationships, hierarchies and multiple data linkages, or the data can quickly spiral out of control.
Dimensions and Measures


 

In Tableau there are four types of pills: discrete dimension, discrete measure, continuous dimension and continuous measure. Understanding these concepts is important for understanding Tableau, and in general they also apply to relational databases. This article explains them, in particular the less common cases of numeric dimensions and non-numeric measures. Once you learn more about dimensions and measures, their basic properties will also become clear.
 

Basics of the Dimensions and measure

 
Dimensions are qualitative and measures are quantitative. Tableau usually assigns a field based on the category of its data. When dimensions and measures are placed on the view, dimensions create headers and measures create axes.
 
  • Green fields = continuous (create axes)
  • Blue fields = discrete (create headers)
  • Bold fields = sorted
  • Fields wrapped in AGG() or another function are aggregated
  • Fields with no () are discrete (often a dimension, but not always)
  • ATTR() runs something like "if MIN(var) = MAX(var) then return var, else return *", so it returns a value only when that value is the same for all rows in the partition.
A discrete field adds a header to the view, and a continuous field adds an axis. If you are comfortable with this difference, you can have continuous dimensions or discrete measures in the view. Measures are aggregated into total values.
 
Notes from Webex:
 
Tableau is essentially a SQL generator that exposes the results. If you want to know how dimensions and measures work, you have to see where they are built into that pipeline.
 
Context Filter:- A temp table (global or local) is created in the data source. It is used when you want to filter down to a bunch of records first and run the rest of the analysis only on them, e.g. analysing only admissions when studying readmission.
 
Top N filters or conditions come next.
All the remaining filters go into the WHERE clause.
After that the aggregations are applied:
-Aggregate filter fields are applied; these results are returned to Tableau.
-Table calcs are performed.
-Table calc filters are applied; this is the final layer.
Reference lines are calculated.
Null marks are not displayed (they are hidden).
 
Excluding/Hiding
Null marks can be hidden using the format pane.
Right-clicking a value in a dimension lets you hide it.
Others can be excluded.
 
Level of Detail Shelf
 
Dimensions are put here when you do not want them to group the view into a set of headers (they can still mark out many results). It is also used to tune the speed of processing.
 
Reference and Average:
 
Reference lines are calculated on the results returned to Tableau. These results can differ from an average calculated on the underlying data with total functions; changing a reference line from AVG() to TOTAL() will make the results different.
 
Reference lines are calculated and applied after the table calculation filters.
 
Notes from Mar:
 
-In Tableau, dimensions are available for addressing and partitioning in table calculations.
 
-The dimensions in the view affect the aggregation in table calcs: more dimensions return more marks, and more aggregation returns fewer marks. Continuous vs discrete does not change the number of marks.
 
-If you want the aggregation to ignore a dimension in a table calc, you can use ATTR() on that dimension. Aggregation is not the reason for partitioning.
Software Development Life Cycle


The Software Development Life Cycle (SDLC) is used to develop, design and test the quality of software in any industry; it is the process followed across the industry. The goal of the SDLC is to produce high-quality software that meets or exceeds the expectations of the customer or consumer, and to reach that goal at a low cost.
· SDLC is one of the core parts of the software development process.
 
· It is also known as the Software Development Process.
 
· The SDLC plan is used to explain how the tasks are performed at each step of developing the software.
 
· ISO/IEC 12207 is the international standard for SDLC processes. Its aim is to be the international standard that defines how to develop and maintain all the tasks required by the Software Development Life Cycle.
 

What Is SDLC

 
Software organisations use the software development process for their software projects. Within a project it explains every point: how to build, develop, maintain, replace and alter the particular software. The life cycle process is used to improve the quality of the software and of the development process in the organisation. Here we will see what the SDLC is and its stages.
 

The typical life cycle stages in the SDLC are given below:

 
· Planning
 
· Defining
 
· Designing
 
· Building
 
· Testing
 
· Deployment
 
Planning and Requirement Analysis
 
Requirement analysis and planning is one of the most important and fundamental stages in the Software Development Life Cycle. It is performed by the senior members of the team, using inputs from the customer and from various departments such as the marketing department, the sales department and the domain experts in the industry. This information is used to plan the basic project approach, to meet the required standards, and to conduct a feasibility study in the operational and technical areas.
 
Defining Requirements
 
Defining requirements is done after completing the planning and requirement analysis. In this step we define a clear vision of the product requirements and get it approved by the customer or consumers and the market analysts. This analysis is captured in an SRS, which stands for Software Requirement Specification. This document has the complete information needed to design and develop the product during the project life cycle.
 
Designing the Product Architecture
 
The product architecture is designed in order to develop the product. The SRS helps in designing the architecture from which the product outcome will be built. Based on the requirements specified in the SRS, one or more design approaches are proposed and documented in a DDS (Design Document Specification).
 
Building or Developing the Product
 
In the SDLC this stage is viewed as the actual development. It starts with building the product: the programming language code is written according to the DDS. If the design has been expressed in a detailed and organised manner, code generation can be completed without difficulties.
 
Testing the Product
 
In modern SDLC models, testing activities are involved in every stage of the SDLC, so this stage is only one of the stages concerned with testing. Here the product defects are reported, tracked, fixed and retested until the product reaches the quality standard.
 
Deployment in the market and its maintenance
 
After the above testing stages, the product is deployed and launched in the market. Sometimes the product launch happens in phases as per the business strategy of the organisation; some organisations launch the product in a limited segment first and test it in the real business world.

When you install SQL Server on a PC, a few databases are installed automatically; these are called system databases. SQL Server uses the system databases to store its configuration settings and data, and data about every one of the databases installed in the current SQL Server instance. System databases in SQL Server are used to track operations and to give a temporary work area to clients for doing database operations.

System Databases in SQL Server

List of Databases in SQL Server

Below you can find the List of Databases in SQL Server 2012

  • Master database.
  • Msdb database.
  • Model database.
  • Tempdb database.
  • Resource database.

We can see all databases except the Resource DB in Object Explorer.
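
As a small illustration (not part of the original article), the databases on an instance can also be listed by querying the sys.databases catalog view. The sketch below uses the pyodbc library; the connection string is a placeholder to adapt to your own server.

import pyodbc

# Placeholder connection string: adjust the driver, server and authentication for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()

# database_id 1-4 are master, tempdb, model and msdb; the Resource database is not listed here.
cur.execute("SELECT database_id, name FROM sys.databases ORDER BY database_id")
for database_id, name in cur.fetchall():
    print(database_id, name)

conn.close()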

The master Database

The master database contains the data about every database installed on the present instance of SQL Server. It additionally contains configuration and status information about the current SQL Server instance. This information is saved in system tables and can be accessed by DBAs using system functions and views. When a developer creates a new database, all the corresponding details are stored in the master database. Try to avoid changes in the master database, because if the master database is corrupted the entire server can go down.

The model Database

The model database gives you a template for creating new databases. When a new database is created, all model database objects are copied into the new database. Any changes made in the model database are reflected in the databases created afterwards on that server.

The MSDB Database

The msdb database has configuration data about different services, for example SQL Server Agent, Database Mail and Service Broker. This database stores the job scheduling information and alerts for the SQL Server Agent service. Try to avoid changing the information in msdb directly; if required, use the stored procedures and views of msdb.

The TEMPDB Database

The tempdb database stores the temporary tables created by users. SQL Server also uses tempdb to save the intermediate results of complex queries. All tempdb tables, views and other objects are dropped when SQL Server is restarted.

The Resource Database

The Resource database is a read-only database that stores all the system objects that ship with SQL Server. The system objects are physically contained in this database, yet they appear logically in the sys schema of each database. mssqlsystemresource.mdf and mssqlsystemresource.ldf are the physical files of the Resource database.