Tosca Meaning

 

Software testing tools have become very useful in day-to-day work; they are widely used for defect management and test management, and they are popular for good reason. Tosca is one of the most important of these tools, and it is concerned with testing the functionality of applications. The name Tosca is of Italian origin.
 

Introduction to Tosca

 
Tricentis Tosca is an enterprise software testing suite. It is an automated testing tool that provides end-to-end test case design, execution, and management in one comprehensive system. Its design follows the LinearQ methodology, and many aspects of the testing process were considered from the earliest stage of the product's creation. The most important techniques, which keep it ahead of its peers, are model-based testing and risk-based testing.
 
Techniques of Tosca
 
Tosca relies on two core techniques: model-based testing and risk-based testing.

Model-based test Technique

 
The main advantage Tosca has over script-based automation tools comes from its model-based test technique. Instead of scripting each test, Tosca builds a model of the application under test (AUT). The technical details of the AUT are kept separate from the logical test steps and the test data, and the two are merged only at test execution time.
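A rough Python sketch of the idea (not Tosca's actual implementation; the field names and steps below are invented) shows how a logical test and its test data can live separately and only meet at execution time:

```python
# Illustrative only: a logical test kept separate from its test data,
# merged at execution time, roughly mirroring the model-based idea.

# Logical test steps refer to named fields of the model, not concrete values.
logical_test = [
    ("enter", "username"),
    ("enter", "password"),
    ("click", "login_button"),
    ("verify", "welcome_message"),
]

# Test data is maintained independently of the steps.
test_data = {
    "username": "demo_user",                   # hypothetical values
    "password": "secret",
    "welcome_message": "Welcome, demo_user!",
}

def execute(steps, data):
    """Merge the logical steps with concrete data at execution time."""
    for action, field in steps:
        value = data.get(field, "")
        print(f"{action:7s} {field:17s} -> {value}")

execute(logical_test, test_data)
```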
 
Risk-based test technique
 
In this technique, risk is assessed with respect to the test cases so that the right subset of tests can be identified and prioritised. Tosca suggests this subset using various black-box test design techniques, such as decision boxes, boundary testing, equivalence partitioning, combinatorial methodologies, and linear expansion.
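The black-box ideas behind boundary testing and equivalence partitioning can be sketched in a few lines of Python (the age range here is an invented example, not something prescribed by Tosca):

```python
# Illustrative sketch: derive test values for a numeric input that is
# valid between MIN_AGE and MAX_AGE (inclusive).
MIN_AGE, MAX_AGE = 18, 65

# Equivalence partitioning: one representative value per class.
partitions = {
    "below_valid_range": MIN_AGE - 5,                # invalid class
    "inside_valid_range": (MIN_AGE + MAX_AGE) // 2,  # valid class
    "above_valid_range": MAX_AGE + 5,                # invalid class
}

# Boundary testing: values just around each edge of the valid range.
boundaries = [MIN_AGE - 1, MIN_AGE, MIN_AGE + 1,
              MAX_AGE - 1, MAX_AGE, MAX_AGE + 1]

print("Partition representatives:", partitions)
print("Boundary values:", boundaries)
```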
 
Key Features
 
1. Test cases are weighted and prioritised according to their importance and criticality. A reporting mechanism is built on top of this, so requirements gain points that depict the impact of technical weak points.
 
2. Dynamic business steering is the main aim of Tosca Commander. It steers not just the input data but the entire test: test cases are created with drag-and-drop features, validations are added afterwards, and the result is a business-readable description for both automated and manual test cases.
 
3. Tosca also generates synthetic, dynamic test data and automatically steers business-dynamic test case generation. Both GUI-based and CLI-based test cases can be executed, with unified handling of functional and automated testing.
 

Supported platforms

 
- Application programs: Siebel, SAP
 
- Single-point application programs: MS Outlook, MS Excel
 
- Web browsers: Firefox, Internet Explorer
 
- Application development environments: PowerBuilder
 
- Frameworks and programming languages: Java, Visual Basic, Delphi, .NET including WPF
 
- Host applications: 3270, 5250
 
- Protocols and hardware: Web Services (SOAP), ODBC (Oracle driver), Flash
 

The current version of Tosca on the market is 12.2. The Tosca test suite consists of:

 
- Tosca XScan (Tosca Wizard)
 
- Tosca Executor
 
- Tosca Commander
 
- Test Repository
 
Tosca Commander is the core backbone of the suite: it is where test creation, execution, analysis, and the management of test scripts are set up.
Cloud Computing
Definition: Cloud computing generally refers to sharing and accessing information over the internet rather than from local hard drives, local servers, or personal computers. In this sense, "cloud computing" simply refers to internet-based web space.


Cloud computing Introduction

 
Cloud computing delivers everything as an internet service: resources can be accessed and managed over the internet from anywhere. The technology is built on data centres, many of them, connected across the network, and it comprises computing, software, and storage.

History of Cloud Computing

 
In the early stages of the technology, the client-server model was dominant, used in combination with terminal applications and mainframes. At that time information was stored centrally on the mainframe, which was very expensive. Resources were later rearranged into smaller client-server setups, which became popular for storing large amounts of data and revolutionised mass storage capacity.

Cloud computing companies

 
Cloud computing companies come in all shapes and sizes. Large vendors keep launching new offerings, and many startups are launching products of their own. The major vendors in cloud computing services are given below.
 
Infrastructure as a Service cloud computing companies:
 
  • Amazon: Its offerings include S3, SimpleDB, and EC2.
  • Rackspace: Its offerings include Cloud Sites, Cloud Drive, and Cloud Servers.
  • GoGrid: It offers cloud storage and cloud hosting.
  • IBM: It provides Computing on Demand and the Smart Business Storage Cloud.
  • AT&T: It provides Synaptic Storage as a Service and compute as a service.
 
Platform as a Service cloud computing companies
 
  • Google App Engine: A platform for developing applications in Python and Java.
  • Force.com: A platform for developing applications in Apex, its proprietary programming language.
  • Microsoft Azure: A platform for developing .NET applications.
Software as a Service companies
 
  • Google: A major player in the SaaS space, with Gmail, Google Docs, Google Calendar, and Picasa.
  • IBM: Offers LotusLive iNotes, a web-based email service that provides messaging and calendaring capabilities to business users.
  • Zoho: An online suite of products similar to Microsoft Office.

Importance of cloud computing

 
Nowadays cloud computing is as important as internet service itself. Internet use spread widely in the 1990s, and today cloud computing services are reaching a similar scale. Almost everyone has come to rely on the cloud.
 
Cloud computing is already a reality:
 
In the future it will become even more valuable as an internet service and essential for everyone. It comes down to continuity: reducing the need for energy and physical space and, above all, achieving a better economy.
 
Arguments in favor of the use of Cloud Computing:
 
There are several arguments in favour of cloud computing. The main ones are:
 
  • Elasticity of demand
  • Cost savings
  • Speed
 
Advantages and disadvantages:
 
Advantages:
 
  • Cost efficiency
  • High Speed
  • Excellent accessibility
  • Back-up and restore data
Disadvantages
 
  • Security issues
  • Low bandwidth
  • Flexibility issues
  • Incompatibility

Recovery Models in SQL Server

 

Recovery models in SQL Server determine how SQL Server manages the log files and how your database is prepared for recovery after data loss or any other failure. Each model represents a different way of balancing the trade-off between saving disk space and having granular disaster recovery options. There are three models:

  • Simple
  • Full
  • Bulk-logged

SIMPLE RECOVERY MODEL

Under the simple recovery model, SQL Server keeps only a minimal amount of information in the transaction log. It truncates the transaction log each time the database reaches a transaction checkpoint, leaving no log entries for disaster recovery purposes.

With this model you can restore only full or differential backups. It is not possible to restore such a database to a given moment in time; you can restore it only to the exact time when a full or differential backup was taken. Hence, you will lose any data changes made between the latest full/differential backup and the time of the failure.

FULL RECOVERY MODEL

With this model, SQL Server preserves the transaction log until you back it up.
This lets you design a disaster recovery plan that contains full and differential backups in conjunction with transaction log backups.

In case of a database failure, you have the most flexibility restoring databases that use the full recovery model. Because the data changes stored in the transaction log are preserved, the full recovery model lets you restore a database to a particular moment in time.

For example, if an incorrect modification corrupted your data at 4:36 p.m. on Monday, you could use SQL Server's point-in-time restore to roll the database back to 4:35 p.m., just before the corruption.
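As a sketch only, that point-in-time restore could be scripted from Python with the pyodbc driver. The database name, backup paths, and timestamp below are hypothetical; in practice you would restore your own full backup plus every log backup taken since:

```python
import pyodbc

# Minimal sketch: SalesDB, paths, and the STOPAT time are invented examples.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
conn.autocommit = True   # RESTORE cannot run inside a user transaction
cur = conn.cursor()

# 1. Restore the most recent full backup, leaving the DB in the RESTORING state.
cur.execute(
    "RESTORE DATABASE SalesDB FROM DISK = 'C:\\backups\\SalesDB_full.bak' "
    "WITH NORECOVERY, REPLACE;"
)
while cur.nextset():     # drain progress messages so the restore completes
    pass

# 2. Roll the log forward, stopping just before the bad change (4:35 p.m.).
cur.execute(
    "RESTORE LOG SalesDB FROM DISK = 'C:\\backups\\SalesDB_log.trn' "
    "WITH STOPAT = '2024-01-08T16:35:00', RECOVERY;"
)
while cur.nextset():
    pass
```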

BULK-LOGGED RECOVERY MODEL

The bulk-logged recovery model is similar to the full recovery model. The main difference is how it handles bulk operations in the database: the bulk-logged model records these operations in the transaction log using a method known as minimal logging, which saves log space but prevents you from using the point-in-time restore option.

Microsoft recommends using the bulk-logged recovery model only for brief periods of time. Best practice is to switch a database to the bulk-logged recovery model just before running bulk operations, and to switch it back to the full recovery model when those operations finish.
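A minimal sketch of that best practice, again using pyodbc from Python and a hypothetical SalesDB database:

```python
import pyodbc

# Minimal sketch: SalesDB and the backup path are invented names.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
conn.autocommit = True
cur = conn.cursor()

# Switch to bulk-logged just before the bulk operation.
cur.execute("ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;")

# ... run the bulk operation here (BULK INSERT, SELECT INTO, index rebuilds, ...)

# Switch back to full and take a log backup so the point-in-time
# restore chain is re-established.
cur.execute("ALTER DATABASE SalesDB SET RECOVERY FULL;")
cur.execute("BACKUP LOG SalesDB TO DISK = 'C:\\backups\\SalesDB_log.trn';")
while cur.nextset():   # drain backup progress messages
    pass
```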

CHANGING RECOVERY MODELS IN SQL SERVER

  • Open SQL Server Management Studio and connect to the server.
  • Select the database:
    Expand Databases and select a user or system database.
  • Open the Database Properties:
    Right-click the database, and then click Properties.
  • View the current recovery model:
    In the Select a page pane, click Options to see the current recovery model.
  • Select the new recovery model:
    Choose either Full, Bulk-logged, or Simple.
  • Click OK.
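The same change can be made with a single ALTER DATABASE statement instead of the UI. A minimal sketch, run here from Python with pyodbc against a hypothetical SalesDB database:

```python
import pyodbc

# Minimal sketch: SalesDB and the connection string are assumptions.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
conn.autocommit = True
cur = conn.cursor()

# Check the current recovery model.
cur.execute("SELECT recovery_model_desc FROM sys.databases WHERE name = 'SalesDB';")
print("before:", cur.fetchone()[0])

# Switch it; the valid options are FULL, BULK_LOGGED, and SIMPLE.
cur.execute("ALTER DATABASE SalesDB SET RECOVERY SIMPLE;")

cur.execute("SELECT recovery_model_desc FROM sys.databases WHERE name = 'SalesDB';")
print("after:", cur.fetchone()[0])
```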
Different types of Statistics


 

In data analysis there are two main branches of statistics: descriptive statistics and inferential statistics. Descriptive statistics uses simple indexes to summarise the data, such as measures of central tendency and measures of dispersion. Inferential statistics is used to draw conclusions about data that is subject to random variation.

 

Descriptive Statistics

Descriptive statistics is used to describe the basic features of the data in a study. Sample-based and graphical measures are used to summarise and analyse the data, and they form the basis of quantitative analysis. Descriptive statistics covers the measures of central tendency and the measures of dispersion (spread).
 

Measures of Central Tendency

 
Plotting the frequency distribution of the data shows its general shape and gives a sense of where the values are bunched. Different statistics are used to describe the centre of a distribution; these statistics are referred to as measures of central tendency. The mean, median, and mode are used to measure central tendency.
 
Mean
 
The mean is the most commonly used measure of central tendency and the easiest to handle mathematically. It describes the average of the distribution and is equal to ΣX/N: the sum of all scores in the distribution divided by the total number of scores. The mean is the balancing point of the distribution: if we subtract the mean from each value, the sum of all these deviations is zero.
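A quick worked example in Python (the scores are made up) shows both the ΣX/N calculation and the zero-sum property of the deviations:

```python
import statistics

scores = [4, 8, 6, 5, 7]          # made-up score distribution

mean = sum(scores) / len(scores)  # ΣX / N = 30 / 5
print(mean)                       # 6.0
print(statistics.mean(scores))    # 6.0, same result from the standard library

deviations = [x - mean for x in scores]
print(sum(deviations))            # 0.0 -- deviations from the mean cancel out
```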
 
Median
 
The median is the score that divides the distribution into two halves: half of the scores lie above it and half below it. To find it, the data is first arranged in numerical order. The position of the median is given by the formula (N + 1)/2, where N is the total number of scores. If N is odd, the result is an integer and points directly at the middle score of the ordered distribution; if N is even, the median is the average of the two middle scores.
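A small Python check of the (N + 1)/2 position formula, again with made-up scores:

```python
import statistics

scores = [7, 3, 9, 5, 11]          # made-up scores, N = 5 (odd)
ordered = sorted(scores)           # [3, 5, 7, 9, 11]

position = (len(ordered) + 1) / 2  # (5 + 1) / 2 = 3.0 -> the 3rd score
print(ordered[int(position) - 1])  # 7 (lists are 0-indexed, so subtract 1)
print(statistics.median(scores))   # 7, same answer

even_scores = [3, 5, 7, 9]             # N = 4 (even): position 2.5 falls between
print(statistics.median(even_scores))  # 6.0 -> average of the two middle scores
```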
 
Mode
 
The mode is another important measure of central tendency. Put simply, it is the most frequent or most common score in the distribution: the value of the mode corresponds to the X value at the highest point of the distribution. The highest frequency can be shared by more than one value; that type of distribution is known as multimodal.
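And the mode in Python, with multimode (available in Python 3.8+) illustrating the multimodal case; the scores are made up:

```python
import statistics

scores = [2, 4, 4, 5, 7, 4, 9]
print(statistics.mode(scores))        # 4 -- the most frequent score

bimodal = [1, 1, 2, 3, 3]
print(statistics.multimode(bimodal))  # [1, 3] -- two values share the highest frequency
```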
Data Warehousing


 

In every organisation, the data warehouse is meant to support the decision-making process. It is an integrated, subject-oriented, time-variant, and non-volatile collection of data.

The objective of this material on data warehousing is to help people understand, at a high level, what it takes to implement a successful DWH project, drawing on experience from such projects, and to help business intelligence professionals on both the vendor and the client side.

Components of the Data warehousing

The components of data warehousing can be classified into six major parts:

Tools

This part covers the business intelligence and DWH tools that selection teams need to evaluate. The tools covered are:

  • Reporting
  • OLAP
  • ETL (Extraction, Transformation, and Loading)
  • Metadata
  • Database, Hardware

Steps: This section lists the typical milestones of a DWH project, from requirements gathering and query optimisation through production roll-out and beyond. Observations from the field are also included.

Business intelligence

Data warehousing is an important part of business intelligence. This section explains the relationship between the DWH and business intelligence and also discusses business intelligence itself.

 Trends

This section lists the current trends in the data warehousing field:

  • Industry consolidation
  • Lack of collaboration with data mining efforts
  • How to measure success
  • Quick implementation time
  • Recipes for data warehousing project failure

The DWH Quality Management

DWH quality management delivers quality solutions across the end-to-end process, enabling data profiling and data quality checks, which are important when implementing a data warehouse. During data collection it establishes the generated mappings and keeps a check on the storage repository and the metadata, based on the business rules and standards.

There are four primary quality phases in the data warehousing life cycle:

  • Assessment Quality
  •  Design Quality
  • Transformation Quality
  • Monitoring Quality

Life Cycle

DWH design can begin once the tool and personnel selections have been made. The data warehousing life cycle involves many typical steps:

  • Requirement Gathering
  • Physical Environment Setup
  • Data Modeling
  • ETL
  • OLAP Cube Design
  • Front End Development
  • Report Development
  • Performance Tuning
  • Query Optimization
  • Quality Assurance
  • Rolling out to Production
  • Production Maintenance
  • Incremental Enhancements

These steps are essential when designing a DWH; they are the typical phases of the DWH design effort. For each phase, several different aspects are described:

Task Description:

Explains what typically needs to be accomplished during the particular DWH design phase.

Time Requirements:

An estimate of how long the particular DWH task takes.

Deliverables:

At the end of a typical DWH task, one or more documents are produced that explain the results of that task. They are very important for communicating those results between the clients and the consultants.

Possible Pitfalls:

Things to watch out for. Some of them are obvious and some are not, but all of them are real issues that occur in data warehousing projects.

 

BIG DATA


 
Big data is a term that describes large volumes of data, both structured and unstructured, that flood an organisation on a day-to-day basis. But it is not the amount of data that matters; it is what the organisation does with the data. Big data can be analysed for insights that lead to better decisions and strategic business moves.
 

Big Data Importance

 
The importance of big data does not revolve around how much data you have, but around what you do with it. You can take data from any source and analyse it to achieve:
 
· Cost reduction
 
· Time reduction
 
· Better decision making
 
· New product development and optimised offerings
 
When you combine big data with high-powered analytics, you can accomplish business-related tasks such as:
 
· Generating coupons at the point of sale based on the customer's buying habits.
 
· Determining the root causes of failures, issues, and defects in near-real time.
 
· Recalculating entire risk portfolios in minutes.
 
· Classifying data and identifying the root causes of problems before they harm the organisation.

 

Big data history and current applications

 
Although the term "big data" is relatively new, the practice of gathering and storing large amounts of information is age-old. The idea gained momentum in the early 2000s, when industry analyst Doug Laney articulated the now-standard definition of big data as the three Vs:
 
Volume
 
Organisations collect data from a variety of sources, including business transactions, social media, and machine-to-machine or sensor data. In the past, storing it all would have been a problem, but new technologies have eased the burden.
 
Velocity
 
Big data streams in at remarkable speed and must be dealt with in a timely manner. RFID tags, sensors, and smart metering are driving the need to handle torrents of data in near-real time.
 
Variety
 
Data comes in all types of formats: structured numerical data in traditional databases, unstructured text documents, video, email, stock ticker data, and financial transactions.
 
At SAS, two additional dimensions are considered when it comes to big data:
 
Variability
 
In addition to increasing velocity and variety, data flows can be highly inconsistent, with periodic peaks. Daily, seasonal, and event-triggered peaks, such as a topic trending on social media, can be challenging to manage, and even more so when the data is unstructured.
 
Complexity
 
Today's data comes from many different sources, which makes it difficult to link, cleanse, match, and transform data across systems. It is still necessary to connect the data and correlate relationships, hierarchies, and multiple data linkages, otherwise the data can quickly spiral out of control.
Dimensions and Measures


 

In Tableau there are four types of pills: discrete dimensions, discrete measures, continuous dimensions, and continuous measures. These concepts are central to understanding how Tableau works, and they also help with understanding relational databases in general. This article explains them in some detail, including the less obvious cases such as numeric dimensions and non-numeric measures. Once you learn more about dimensions and measures, you will also come to know their basic properties.
 

Basics of Dimensions and Measures

 
Dimensions describe qualitative data while measures describe quantitative data, and Tableau treats a field differently depending on which category it falls into. When placed on a view, dimensions usually create headers and measures usually create axes.
 
  • Green fields = continuous (create axes)
  • Blue fields = discrete (create headers)
  • Bold fields = sorted
  • Fields wrapped in AGG() or another aggregation are aggregated
  • Fields with no () are not aggregated (often a dimension, but not always)
  • ATTR() behaves roughly like "if MIN(var) = MAX(var) then return var, otherwise return *", so it shows a single value only when all rows agree (see the small sketch below)
A discrete field is added to the view as a header, and a continuous field is added as an axis. Once you accept that distinction, you can also have continuous dimensions or discrete measures in a view. Measures are aggregated into totals by default.
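The ATTR() behaviour described in the list above can be mimicked in a few lines of Python; this is only an analogy for how Tableau's ATTR() is commonly described, not Tableau code:

```python
def attr(values):
    """Return the single value if all rows agree, otherwise '*',
    roughly mirroring Tableau's ATTR() aggregation."""
    return values[0] if min(values) == max(values) else "*"

print(attr(["East", "East", "East"]))  # East -- all rows agree
print(attr(["East", "West"]))          # *    -- mixed values at this level of detail
```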
 
Notes from Webex:
 
Tableau is essentially a SQL generator that exposes the results. To understand how dimensions and measures behave, you have to understand where they fit into that pipeline.
 
Context filter: creates a temporary table (global or local) in the data source, so that subsequent filters operate only on that subset of the data. Example: restricting an analysis to admissions when studying readmission.
 
Top N filters and conditions come next.
All remaining filters go into the WHERE clause.
After that, the aggregations are applied:
-Aggregate filter fields are applied; these results are returned to Tableau.
-Table calcs are performed.
-Table calc filters are applied; this is the final layer.
Reference lines are then calculated.
Null marks are hidden rather than displayed.
 
Excluding / Hiding
Null marks can be hidden by using the format pane.
Dimension values can be hidden by right-clicking them.
Everything else is eliminated (excluded).
 
Level of Detail Shelf
 
The Level of Detail shelf lets you add a dimension without having it group the marks (which would produce many more marks). It can also be used to speed up processing.
 
Reference and Average:
 
Reference lines are calculated on the aggregated results in the view. These can differ from an average calculated over the underlying data with total functions, which is why switching a reference line from AVG() to TOTAL() can change the results.
 
Reference lines are calculated and applied after table calculation filters.
 
Notes from Mar:
 
-In Tableau, dimensions are what is available for addressing and partitioning.
 
-Dimensions and aggregation both affect table calcs: adding dimensions results in more marks, while aggregating returns fewer marks. Continuous vs discrete does not change the number of marks.
 
-If you want a table calc to ignore a dimension, aggregate it with ATTR(). Aggregation is not what drives the partitioning.
Software Development Life Cycle


The Software Development Life Cycle (SDLC) is the process used to design, develop, and test high-quality software in any industry. The goal of the SDLC is to produce software that meets or exceeds customer expectations and is completed at the lowest possible cost.
· The SDLC is a core part of the software development process.
 
· It is also known as the Software Development Process.
 
· The SDLC framework defines the tasks to be performed at each step of developing the software.
 
· ISO/IEC 12207 is the international standard for software life-cycle processes. Its aim is to define, at an international level, all the tasks required for developing and maintaining software through the SDLC.
 

What Is SDLC

 
Software organisations follow the SDLC on their software projects. It describes, point by point, how to develop, maintain, replace, alter, or enhance specific software. The life cycle defines a methodology for improving the quality of the software and the overall development process within the organisation. Its stages are described below.
 

The typical SDLC life cycle stages are:

 
· Planning
 
· Defining
 
· Designing
 
· Building
 
· Testing
 
· Deployment
 
Planning and Requirement Analysis
 
Planning and requirement analysis is the most important and fundamental stage of the SDLC. It is performed by the senior members of the team using inputs from the customer and from various departments, such as marketing, sales, and domain experts in the industry. This information is used to plan the basic project approach and to conduct a feasibility study of the product in the operational, economical, and technical areas.
 
Defining Requirements
 
Defining requirements is done after the planning and requirement analysis is complete. In this step the product requirements are clearly defined, documented, and approved by the customer and the market analysts. This is done through an SRS (Software Requirement Specification) document, which contains all the product requirements to be designed and developed during the project life cycle.
 
Designing the Product Architecture
 
Next, the product architecture is designed. The SRS is the reference used to design the architecture that the product will be built on. Based on the requirements specified in the SRS, one or more design approaches are proposed and documented in a DDS (Design Document Specification).
 
Building or Developing the Product
 
In this stage of the SDLC the actual development starts and the product is built. The programming code is generated according to the DDS. If the design was produced in a detailed and organised manner, code generation can be completed without much difficulty.
 
Testing the Product
 
In modern SDLC models, testing activities are involved in every stage, so this stage is often a subset of all the others. Still, it refers to the testing-only phase of the product, where defects are reported, tracked, fixed, and retested until the product reaches the required quality standard.
 
Deployment in the market and its maintenance
 
Once the product has been tested through the stages above, it is deployed and launched in the market. Sometimes the launch is tied to the organisation's business strategy: some organisations first release the product in a limited segment and test it in the real business environment before a wider release.

System Databases in SQL Server

When you install SQL Server on a machine, a few databases are installed along with it; these are called system databases. SQL Server uses the system databases to store its configuration settings and data, as well as information about every database installed in the current SQL Server instance. System databases are also used to track operations and to give clients a temporary work area for database operations.

List of Databases in SQL Server

Below is the list of system databases in SQL Server 2012:

  • Master database.
  • Msdb database.
  • Model database.
  • Tempdb database.
  • Resource database.

All of these databases except the Resource database are visible in Object Explorer.
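As a small illustration (the connection string is an assumption about your environment), the same list can be pulled from the sys.databases catalog view; note that the Resource database does not appear there either:

```python
import pyodbc

# Minimal sketch: adjust the connection string for your own server.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()

# database_id 1-4 are master, tempdb, model, and msdb; user databases follow.
cur.execute("SELECT database_id, name FROM sys.databases ORDER BY database_id;")
for database_id, name in cur.fetchall():
    print(database_id, name)
```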

The master Database

The master database contains information about every database installed on the current instance of SQL Server. It also contains the configuration and status of the current SQL Server instance. This information is saved in system tables and can be accessed by DBAs using system functions and views. When a developer creates a new database, the corresponding details are stored and tracked in the master database. Avoid making changes in the master database, because a corrupted master database can bring down the entire server.

The model Database

The model database provides the template for creating new databases. When a new database is created, all objects in the model database are copied into it, so any change made to the model database is reflected in the user databases created afterwards on that server.

The MSDB Database

The msdb database holds configuration data for services such as SQL Server Agent, Database Mail, and Service Broker. It stores the job scheduling information and alerts used by the SQL Server Agent service. Avoid changing the information in msdb directly; if you need to, use the stored procedures and views that msdb provides.

The TEMPDB Database

The tempdb database stores the temporary tables created by users, and SQL Server itself uses tempdb to hold intermediate results of complex queries. All tempdb tables and views are dropped, and tempdb is recreated, when SQL Server restarts.

The Resource Database

The Resource database is a read-only database that stores all the system objects included with SQL Server. The system objects are physically contained in the Resource database, yet they logically appear in the sys schema of every database. mssqlsystemresource.mdf and mssqlsystemresource.ldf are the physical files of the Resource database.

Data Analysis Expressions

 

Data Analysis Expressions (DAX) may sound intimidating if you are hearing the name for the first time, but do not let it fool you: the formulas are easy to understand and learn. Before learning DAX, you should know one thing: it is not a programming language, it is a formula language. It is used to define custom calculations for measures and calculated columns. DAX includes some of the functions used in Excel formulas plus additional functions designed to work with relational data and to perform dynamic, powerful aggregations.
 
 

Understanding DAX Formulas

 
DAX formulas are very similar to Excel formulas. To build one, you start with an equals sign, followed by a function name or expression and any required values or arguments; for example, a simple measure might look like =SUM(Sales[Amount]) (the table and column names here are only illustrative). Like Excel, DAX offers a variety of functions for working with strings, performing calculations on dates and times, and creating conditional values.
 

DAX formulas differ from Excel formulas in the following important ways (a rough analogy in code follows the list):

 
· A DAX formula can perform calculations on a row-by-row basis. DAX offers functions that use the current row value, or a related value, so the result of a calculation is carried by its context.
 
· DAX offers functions that return a whole table as their result rather than a single value. These tables are not displayed directly; they are provided as input to other functions.
 
· DAX provides time intelligence functions. They let you define ranges of dates and times and compare results across parallel periods.
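Since DAX itself is not a general-purpose language, here is a loose pandas analogy (this is not DAX, and the table and column names are invented) of the same three ideas: a row-by-row calculated column, an intermediate result that is itself a table, and a crude parallel-period comparison:

```python
import pandas as pd

sales = pd.DataFrame({
    "Year":   [2022, 2022, 2023, 2023],
    "Amount": [100,  150,  120,  180],
    "Cost":   [60,   90,   70,   100],
})

# 1. Row-by-row calculation, like a DAX calculated column.
sales["Margin"] = sales["Amount"] - sales["Cost"]

# 2. An intermediate result that is a table, fed into further calculations.
per_year = sales.groupby("Year", as_index=False)["Amount"].sum()

# 3. A crude parallel-period comparison, like DAX time intelligence.
per_year["PrevYearAmount"] = per_year["Amount"].shift(1)
per_year["YoYGrowth"] = per_year["Amount"] - per_year["PrevYearAmount"]
print(per_year)
```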
 

Creating formulas by using the formula bar:

 
· Power Pivot, like Excel, provides a formula bar for creating and editing formulas, along with AutoComplete functionality that cuts down on typing and syntax errors.
 
· To enter the name of a table, start typing the name; the AutoComplete drop-down list offers the valid names that begin with those letters.
 
· To enter the name of a column, type an opening bracket and then choose the column from the list of columns in the current table, or type the first letters of the name and pick it from the AutoComplete drop-down list.
 

How to Use AutoComplete:

 
· AutoComplete can be used in the middle of an existing formula with nested functions. The text immediately before the insertion point is used to display the values in the drop-down list, and all of the text after the insertion point remains unchanged.
 
· Names that you define for constants are not displayed in the AutoComplete drop-down list, but you can still type them.
 
· Power Pivot does not add a closing parenthesis or match parentheses for you, and a formula will not save unless every function is syntactically correct, so you must make sure the parentheses are matched yourself.