Tosca Meaning


Software testing tools make our work easier and more productive. They are used for defect management and for test case management, and they are widely popular. This article explains the meaning of Tosca, one of the most important software testing tools, with a focus on its functionality. The name Tosca is Italian in origin.

Introduction to Tosca

Tosca, from Tricentis, is an enterprise suite for testing software applications. It is an automated testing tool that provides end-to-end test case management within a comprehensive management system. Tosca's approach to testing is based on the LinearQ method, which considers quality from the design stage onward, at the very start of product creation. Two techniques in particular put it ahead of its peers: model-based testing and risk-based testing.
Techniques of Tosca
Tosca relies on two main techniques: model-based testing and risk-based testing.

Model-based test Technique

The main feature of Tosca is the leverage it gains over ordinary automation tools through its model-based testing technique. Instead of scripted automation, Tosca builds a model of the Application Under Test (AUT). The technical details of the AUT are kept separate from the logical test case and the test data, and the two are merged at test execution time.
Risk-based test technique
This technique assesses risk with respect to the test cases, helping to identify the right set of test cases affected by the identified risks. It draws on various black-box test design techniques, such as decision boxes, boundary testing, equivalence partitioning, combinatorial methodologies, and linear expansion.
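As a hedged illustration of two of these black-box techniques, independent of how Tosca itself implements them: suppose a field accepts integer ages from 18 to 65 (the range and field are invented for this example). Equivalence partitioning picks one representative per class, while boundary testing clusters values around the edges:

```python
# Hypothetical example: deriving test inputs for a field that accepts
# integer ages in the range 18..65 (the range is illustrative only).

def equivalence_partitions(low, high):
    """One representative value per partition:
    below the range (invalid), inside it (valid), above it (invalid)."""
    return [low - 1, (low + high) // 2, high + 1]

def boundary_values(low, high):
    """Classic boundary test values around each edge of the range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

partitions = equivalence_partitions(18, 65)
boundaries = boundary_values(18, 65)

print(partitions)   # [17, 41, 66] -- one value per equivalence class
print(boundaries)   # [17, 18, 19, 64, 65, 66] -- values around the edges
```

Each generated value would then be fed to the application under test, with invalid values expected to be rejected.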
Key Features
1. Test cases are weighted and prioritized according to their importance and criticality. The reporting mechanism builds on these weights, depicting how well the requirements are covered and the impact of technical weak points.
2. Business-dynamic steering is the main aim of the Tosca Commander. It steers not just the input data but the entire test. Test cases are created using drag-and-drop features, and validations are added afterwards. This gives the test cases a dynamic character and provides business-based descriptions for both automated and manual test cases.
3. Tosca also generates synthetic, dynamic test data and automatically steers business-dynamic test case generation. GUI- and CLI-based test cases are executed with unified handling for both functional and automated testing.

Supported platform

-Application programs: Siebel, SAP.
-Single point application programs: MS Outlook, MS Excel
-Web browsers: Firefox, Internet Explorer
-Application Development Environment: PowerBuilder
-Frameworks and programming languages: Java, Visual Basic, Delphi, .net including WPF
-Host Applications: 3270, 5250.
-Protocols and hardware: Web Services (SOAP), ODBC (Oracle driver), Flash.

The current version of Tosca on the market is 12.2. The Tosca test suite comprises:

-Tosca XScan (the Tosca wizard)
-Tosca Executor
-Tosca Commander
-Test Repository
Tosca Commander is the core backbone of the suite: it handles test creation, execution, analysis, and the management of test script creation.
Cloud Computing
Definition: Cloud computing generally refers to sharing and accessing information over the internet, rather than via local hard drives, local servers, or personal computers. "Cloud computing" can be thought of as internet-based web space.


Cloud computing Introduction

Cloud computing provides everything as an internet service. It lets users access and manage services over the internet from anywhere. Cloud computing relies on data centres: many data centres are involved in the technology, connected across networks and comprising computing software and storage.

History of Cloud Computing

In the early stages of the technology, the client-server model was popular, used in combination with terminal applications and mainframes. At that time information was stored at the CPU, which was very expensive. Resources of both types were connected to the mainframe and later arranged into small client-server setups. The model became popular for storing huge amounts of data and revolutionised mass storage and storage capacity.

Cloud computing companies

Cloud computing companies come in all shapes and sizes. Large vendors are launching one offering after another, and many startup companies are launching different kinds of products. The major vendors in cloud computing services are given below.
Infrastructure-as-a-Service cloud computing companies:
  • Amazon: offerings include S3, SimpleDB, and EC2.
  • Rackspace: offerings include Cloud Sites, Cloud Drive, and Cloud Servers.
  • GoGrid: offerings include Cloud Storage and Cloud Hosting.
  • IBM: offerings include Computing on Demand and the Smart Business Storage Cloud.
  • AT&T: offerings include Synaptic Storage as a Service and Synaptic Compute as a Service.
Platform as a Service cloud computing companies
  • Google App Engine: used to develop Python and Java applications on the Google App Engine platform.
  • Force.com: used to develop applications in Apex, its proprietary programming language, on the Force.com platform.
  • Microsoft Azure: used to develop .NET applications on the Microsoft Azure platform.
Software as a Service companies
  • Google: provides SaaS offerings including Gmail, Google Docs, Google Calendar, and Picasa.
  • IBM: offers web-based email through LotusLive iNotes, providing messaging and calendaring capabilities to business users.
  • Zoho: an online suite of products resembling Microsoft Office.

Importance of the cloud computing

Nowadays cloud computing is as important as the internet service itself. In the 1990s use of the internet became broad; today cloud computing services have reached a similar range, and everyone has come to rely on the cloud.
Cloud computing is already a reality:
In the future it will gain even more value as an internet service and become essential for everyone. It all comes down to continuity: the cloud reduces the need for high energy consumption and physical space, and above all offers a better economy.
Arguments in favor of the use of cloud computing:
There are several arguments for adopting cloud computing. A few of the main points are:
  • Elasticity Demand
  • Cost Savings
  • Speed
Advantages and disadvantages:
Advantages:
  • Cost efficiency
  • High speed
  • Excellent accessibility
  • Back-up and restore of data
Disadvantages:
  • Security issues
  • Low bandwidth
  • Flexibility issues
  • Incompatibility
Principles of Web Designing


Website design matters because it affects users. A design is judged by the site's users, not by its owner: the better the design, the more it attracts users and keeps them watching. Many factors go into designing a website, and the principles of web design concern not only visibility but also function and usability. A website must be easy to use. Nowadays websites are designed not merely to exist but to perform, making them effective, pleasing, engaging, and easy to use.

10 principles of effective website design


Before designing a website, we have to understand its purpose. Web design should always consider the users' needs: a visitor may be looking for information, some type of interaction, entertainment, or a transaction. If you want to improve your business, the design of the website matters all the more.


Most users come to a website for information, so it should be designed to be easy to read, understand, and digest. Organise information with headlines and subheadlines, use bullet points for sub-points instead of long, wide sentences, and cut the fluff.


Font size also matters for readability. Sans-serif fonts such as Arial and Verdana are easy to read, and the ideal body font size online is 16px. Stick to at most 3 typefaces and 3 point sizes to keep the design streamlined.
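As a minimal sketch of these font guidelines in CSS (the selector and values are illustrative, not prescriptive):

```css
/* Illustrative only: a readable baseline using a sans-serif stack */
body {
  font-family: Arial, Verdana, sans-serif; /* sans-serif fonts read well on screen */
  font-size: 16px;                         /* a commonly cited ideal body size online */
}
```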
Colours affect web design: a well-chosen colour palette goes a long way towards a good user experience. Complementary colours build harmony and balance, and contrasting colours between text and background make content very easy to read.
A single image can say more than a thousand words, so choose the right images for the website. Good images help strengthen the brand name and the site's positioning.
Navigation is about making it easy for people to take actions and move around your website. Important navigation features include a logical page hierarchy, clickable buttons, and breadcrumbs.
Grid-based layouts arrange content into sections, boxes, and columns that line up and feel balanced, which makes the website look better.
Eye-tracking studies show that people scan a screen in an "F" pattern: most users see the left side and the top of the screen, and only rarely the right side. Displaying information in line with this natural behaviour is important.
The size and scale of images should be optimised for effective web design, since every website is a combination of HTML with central CSS and JavaScript files, and oversized assets slow loading.
Many users view websites on mobiles, and browsing now happens on many devices with many screen sizes. It is important to be mobile-friendly; you may need to rebuild the site as a responsive website.

Recovery models in SQL Server


Recovery models in SQL Server determine how SQL Server manages log files and readies your database for recovery after data loss or another failure. Each model represents a different way of balancing the trade-off between conserving disk space and granular disaster-recovery options.

  • Simple
  • Full
  • Bulk-logged


Simple recovery model

Under the simple model, SQL Server keeps a minimal amount of data in the transaction log. SQL Server truncates the transaction log each time the database reaches a transaction checkpoint, leaving no log entries for disaster-recovery use.

Using this model, we can recover from full or differential backups only. You cannot restore such a database to a given moment in time; you can only restore it to the exact time when a full or differential backup happened. Hence, you will lose any data changes made between the most recent full/differential backup and the time of the failure.


Full recovery model

With this model, SQL Server preserves the transaction log until you back it up. This lets you draw up a disaster-recovery plan that combines full and differential backups with transaction log backups.

In case of a database failure, you have more flexibility restoring databases under the full recovery model. Besides protecting the data changes saved in the transaction log files, the full recovery model lets you restore a database to a particular moment in time.

For example: if an erroneous modification corrupted your data at 4:36 p.m. on Monday, you could use SQL Server's point-in-time restore to roll the database back to 4:35 p.m., before the corruption occurred.
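A sketch of what such a point-in-time restore could look like in T-SQL (the database name, backup paths, and timestamp are all hypothetical; it requires the full recovery model and an unbroken chain of log backups):

```sql
-- Hypothetical names and paths, for illustration only.
RESTORE DATABASE SalesDB FROM DISK = N'C:\Backups\SalesDB_full.bak'
    WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'C:\Backups\SalesDB_log.trn'
    WITH STOPAT = '2024-01-08 16:35:00',  -- roll forward only to 4:35 p.m.
         RECOVERY;
```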


Bulk-logged recovery model

The bulk-logged recovery model is similar to the full recovery model; the main difference is how it handles bulk changes to databases. The bulk-logged model records these operations in the transaction log using a method known as minimal logging, which saves space but prevents you from using the point-in-time restore option.

Microsoft recommends that the bulk-logged recovery model be used only for brief periods. Best practice dictates switching a database to the bulk-logged recovery model just before performing bulk operations, and restoring it to the full recovery model when those operations finish.


Changing the recovery model

  • Open SQL Server Management Studio and connect to the server.
  • Select the database:
    Expand Databases and select a user database or a system database.
  • Open the Database Properties:
    Right-click the database, then click Properties.
  • View the current recovery model:
    In the Select a page pane, click Options to see the current recovery model.
  • Select the new recovery model:
    Select either Full, Bulk-logged, or Simple.
  • Click OK.
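These GUI steps can also be performed directly in T-SQL; a sketch, with a hypothetical database name:

```sql
-- Hypothetical database name; choose FULL, SIMPLE, or BULK_LOGGED.
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Verify the current recovery model of every database.
SELECT name, recovery_model_desc FROM sys.databases;
```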
Different types of Statistics



In statistics there are two main methods used in data analysis: descriptive statistics and inferential statistics. Descriptive statistics uses simple indexes to summarise the data, such as measures of central tendency and measures of dispersion. Inferential statistics is used to draw conclusions about data that is subject to random variation. These are the different types of statistics we have.


Descriptive Statistics

Descriptive statistics explains the basic features of the data under study. Sample measures and graphic measures are used to summarise and analyse the data, and they form the basis of quantitative data analysis. Descriptive statistics covers measures of central tendency and measures of dispersion.

Measure of central Tendency

Plotting the frequency distribution of the data shows its general shape and gives a sense of where the values are bunched. Different statistics are used to describe where the data is centred in the distribution; these are referred to as measures of central tendency. The mean, median, and mode are used to measure central tendency.
The mean is the most commonly used measure of central tendency. It is easy to handle mathematically and describes the average of the distribution. It is equal to ΣX/N: the sum of all the scores in the distribution divided by the total number of scores. The mean is the balance point of the distribution: if we subtract the mean from each value, the sum of all these deviations is zero.
The median is the score that divides the distribution into two halves: half of the scores lie above it and half below. The data must first be put in numerical order. The position of the median is found with the formula (N + 1)/2, where N is the total number of scores. When N is odd, the result is an integer and points directly to the median value in the ordered distribution; when N is even, the median is the average of the two middle scores.
The mode is another important measure for describing a distribution. Simply defined, it is the most frequent or common score in the distribution; the mode corresponds to the X value at the highest point of the distribution. When the highest frequency is shared by more than one value, the distribution is known as multimodal.
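The three measures above can be sketched in a few lines of Python (the sample data is invented for illustration):

```python
from collections import Counter

def mean(scores):
    """Sum of scores divided by their count: the formula ΣX / N."""
    return sum(scores) / len(scores)

def median(scores):
    """Middle value of the ordered distribution (position (N + 1) / 2)."""
    ordered = sorted(scores)
    n = len(ordered)
    mid = n // 2
    # Average the two middle scores when N is even.
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

def modes(scores):
    """All values sharing the highest frequency (more than one => multimodal)."""
    counts = Counter(scores)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

data = [2, 3, 3, 5, 7, 10]
print(mean(data), median(data), modes(data))  # 5.0 4.0 [3]
```

Note that the deviations from the mean, `sum(x - mean(data) for x in data)`, do indeed sum to zero for this sample, matching the balance-point property described above.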
Data Warehousing

Data Warehousing


In every organisation, data warehousing supports the decision-making process. A data warehouse is an integrated, subject-oriented, time-variant, and non-volatile collection of data.

The object of data warehousing is to help people understand the high-level work involved in implementing a successful DWH project, drawing on experience from such projects, and to help business intelligence professionals on both the vendor and the client side.

Components of the Data warehousing

The components of data warehousing can be classified into six major parts. They are:


Tools: the business intelligence tool selections made by the DWH selection teams. These tools cover:

  • Reporting
  • OLAP
  • ETL (Extraction, Transformation, and Loading)
  • Metadata
  • Database, Hardware

Steps: the typical milestones of a DWH project, from gathering requirements and query optimisation to production roll-out and beyond. Observations from the data warehousing field are also included.

Business intelligence

Data warehousing is an important part of business intelligence. This part explains the relationship between the DWH and business intelligence, and discusses business intelligence itself.


This section lists observations based on trends in the data warehousing field:

  • Industry consolidation
  • Lack of collaboration with data mining efforts
  • How to measure success
  • Quick implementation time
  • Recipes for data warehousing project failure

The DWH Quality Management

DWH quality management delivers quality solutions across the end-to-end process, enabling data profiling and data quality checks during the implementation of the data warehouse. During data collection it establishes and generates mappings, and it keeps a check on the storage repository and the metadata, based on the business rules and ethics.

There are four primary quality phases in the data warehousing life cycle. They are:

  • Assessment Quality
  • Design Quality
  • Transformation Quality
  • Monitoring Quality


The DWH design can begin after the tool decisions and personnel selection are made. The data warehousing life cycle involves many typical steps. They are:

  • Requirement Gathering
  • Physical Environment Setup
  • Data Modeling
  • ETL
  • OLAP Cube Design
  • Front End Development
  • Report Development
  • Performance Tuning
  • Query Optimization
  • Quality Assurance
  • Rolling out to Production
  • Production Maintenance
  • Incremental Enhancements
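The ETL step in the list above can be sketched as a toy pipeline (the records, field names, and transformation rules here are all invented for illustration; real ETL would read from source systems and write to warehouse tables):

```python
# Minimal extract-transform-load sketch; source rows and rules are invented.

source_rows = [
    {"id": "1", "amount": "19.99", "region": "eu"},
    {"id": "2", "amount": "5.00",  "region": "US"},
]

def extract(rows):
    """Pull raw records from the (here: in-memory) source system."""
    return list(rows)

def transform(rows):
    """Clean the data and conform types/values to the warehouse schema."""
    return [
        {"id": int(r["id"]),
         "amount": float(r["amount"]),
         "region": r["region"].upper()}
        for r in rows
    ]

def load(rows, warehouse):
    """Append the conformed rows to the target table."""
    warehouse.extend(rows)

warehouse = []
load(transform(extract(source_rows)), warehouse)
print(warehouse[0])  # {'id': 1, 'amount': 19.99, 'region': 'EU'}
```

The point of the sketch is the separation of phases: extraction touches only the source, transformation enforces the warehouse's rules, and loading touches only the target.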

If we have to design a DWH, the above steps are very important: they are the typical phases of the DWH design process. Each step can be described in several different sections:

Task Description:

This section explains what typically needs to be accomplished during the particular DWH design phase.

Time Requirements:

An estimate of how long the particular DWH task typically takes.

Deliverables:

At the end of each typical DWH task, one or more documents are produced that explain the results of the particular task. They are very important for communicating the results between the clients and the consultants.

Possible Pitfalls:

These are things to watch out for. Some of them are obvious and some of them are not, but all of them are real risks in data warehousing.



Big Data

Big data is the term for data of large volume, both structured and unstructured, that floods an organisation on a day-to-day basis. But it is not the amount of data that is important: what matters is what the organisation does with the data. Big data can be analysed for insights and observations that inform decisions and strategic business moves.

Big Data Importance

The importance of big data does not revolve around how much data you have, but what you do with it. You can collect data from any source and analyse it to find answers that enable:
· Cost reduction
· Time reduction
· Better decision making
· New product development and optimised offerings
When you combine big data with high-powered analytics, you can accomplish business-related tasks such as:
· Providing coupons at the point of sale based on the customer's buying habits.
· Detecting causes, failures, and issues in near real time.
· Recalculating entire risk portfolios in minutes.
· Classifying data and determining root causes of defects before they affect the organisation.


Big data history and current applications

The term big data is relatively new, but the act of gathering and storing large amounts of information for eventual analysis is ages old. The concept gained momentum in the early 2000s, when industry analyst Doug Laney articulated the now-standard definition of big data as the three Vs:
Volume: organisations collect data from a variety of sources, including business transactions, social media, and machine-to-machine and sensor data. In the past, storing it would have been a problem, but new technologies have eased the burden.
Velocity: data streams in at remarkable speed and must be dealt with in a timely way. RFID tags, smart metering, and sensors are driving the need to handle torrents of data in near real time.
Variety: data comes in all different formats, from structured numeric data in traditional databases to unstructured text documents, email, video, stock ticker data, and financial transactions.
At SAS, two additional dimensions are considered when it comes to big data:
Variability: in addition to increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Daily and event-triggered peak data loads, such as trending topics in social media, can be challenging to manage, even more so with unstructured data.
Complexity: nowadays data comes from multiple sources, which makes it difficult to link, match, cleanse, and transform data across systems. It is also necessary to connect and correlate relationships, hierarchies, and multiple data linkages, or the data can quickly spiral out of control.
Dimensions and Measures



In Tableau there are four types of pills, formed by combining two pairs of properties. The four types are: discrete dimension, discrete measure, continuous dimension, and continuous measure. These concepts are key to understanding Tableau, and they also help in understanding relational databases in general. This article explains them, with particular attention to the less intuitive cases: dimensions can be numeric, and measures can be non-numeric. Learning the basic properties of dimensions and measures gives you a foundation for the rest.

Basics of the Dimensions and measure

Dimensions describe the quality of the data, while measures describe its quantity. Tableau treats a field differently based on its category when it is placed on the view: dimensions create headers, and measures create axes.
  • Green fields = continuous (create axes)
  • Blue fields = discrete (create headers)
  • Bold fields = sorted
  • Fields with AGG() or another function are aggregated
  • Fields with no () are discrete (often a dimension, not always)
  • ATTR() runs something like "if MIN(var) = MAX(var) then return var, else return *", so it returns a single value only when all rows agree.
A discrete field adds a header to the view, and a continuous field adds an axis. If you accept this distinction, you can have continuous dimensions or discrete measures in the view. Measures are aggregated into totals.
Notes from Webex:
Tableau is essentially a SQL generator that exposes the results. To understand how dimensions and measures work, it helps to see how they are built into this pipeline:
Context filter: Tableau creates a temporary table (global or local) in the data source, so that subsequent filters work on only that subset of rows, e.g. restricting the analysis to admissions when studying readmission.
Top N filters and conditions are applied next.
All remaining filters go into the WHERE clause.
Then the aggregations are applied:
-Filters on aggregated fields are applied to the results returned to Tableau.
-Table calculations are performed.
-Table calculation filters are applied; this is the final layer.
Reference lines are calculated.
Null marks can be hidden rather than displayed, by excluding or hiding:
-Using the format pane, null marks can be hidden.
-Right-clicking a value in a dimension hides it.
-Others can be eliminated by excluding.
Level of Detail Shelf
Place dimensions on the Level of Detail shelf when you want them to affect the level of detail (marking many results) without grouping by them as header attributes. It can also be used to speed up processing.
Reference and Average:
Reference lines are calculated on the query results, and these can differ from an average calculated over the table's underlying data with total functions: changing a reference line from AVG() to TOTAL() will make the results differ.
Reference lines are calculated and applied after table calculation filters.
Notes from Mar:
-In Tableau, dimensions are available for addressing and partitioning in table calculations.
-In table calcs, dimensions affect the aggregation: more dimensions result in more marks, and more aggregation returns fewer marks. Continuous vs discrete doesn't change the number of marks.
-If you want a table calc to ignore a dimension, wrap it in ATTR() to aggregate it; aggregation is not the basis for separation.