One of the basic concepts of quality control emphasizes making assessments "based on facts and data" as the best way to minimize the possibility of wrong judgments that lead to strategic decisions that are mistaken and detrimental to quality. Making decisions based on factual data represents a scientific approach to management, whose fundamental discipline is statistics, which allows us to draw inferences from a limited amount of data. If we want to improve a process or attack an existing problem, it is very helpful to analyze data obtained through scientific observation of the process. The control of a process must extend to maintaining and improving it, based on data and its analysis.
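The statistical idea in this paragraph, inferring the behavior of a whole process from a few observations, can be sketched in a few lines of Python. All figures below are invented for illustration:

```python
# Hypothetical example: inferring process behavior from a small sample,
# as the paragraph describes. All figures are invented for illustration.
import math
import statistics

# Ten fill weights (grams) sampled from a production line
sample = [502.1, 499.8, 500.5, 501.2, 498.9, 500.0, 501.7, 499.4, 500.8, 500.3]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)      # sample standard deviation
sem = sd / math.sqrt(n)            # standard error of the mean

# Approximate 95% confidence interval (t = 2.262 for 9 degrees of freedom)
t = 2.262
low, high = mean - t * sem, mean + t * sem
print(f"estimated process mean: {mean:.2f} g, 95% CI ({low:.2f}, {high:.2f})")
```

From just ten measurements we obtain an interval estimate for the whole process, which is the kind of inference the text attributes to statistics.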

In the process of problem analysis, several quality tools are used to help us identify the real root causes of a problem and implement actions to solve it. These tools include:

q  Report of 8 disciplines (8D).
q  Cause-Effect Diagrams.
q  Corrective actions.
q  Etc.


As noted, evaluating the effectiveness of different approaches to improving productivity by examining the results of studies requires making a number of assumptions. A very important assumption is that a given intervention is warranted in the situation in which it is applied: there is no need to reinforce organizational or work characteristics that are already quite satisfactory. However, a common criticism of some organizational development consultants is that they tend to consider that their subspecialties (management development, job redesign) provide useful techniques for improving the functioning of all organizations or, worse still, provide solutions for all organizational problems. As Paul Thayer has commented: "We go around, warranted or not, applying our solution to any organization that allows us to, without the careful diagnosis that is necessary… sometimes we return to the original problem and wonder how they, and especially we, came to think that the solution would adequately fit the problem."

In general, there is agreement that organizations should totally avoid ready-made interventions that offer universal solutions (panaceas) for organizational improvement. As Mayford Roark has said: "beware of the man who knows the answers before he has understood the questions."

Although it is possible for a productivity improvement intervention to appear successful when there is no organizational diagnosis (just as it is possible for someone to provide the correct answer without hearing the question), the probability of successful action is obviously higher if it is based on diagnostic information. "To face reality, one must know what reality is."


In view of the importance of an organizational diagnosis as the basis for an improvement activity, the question arises about the criteria for judging the suitability of a given diagnostic procedure. Three criteria are of special importance.

First, the diagnostic procedure must produce valid data, not contrived or false "results" arising from particular measurement methods or observers. It is often argued: "Every good manager knows what his people are thinking. That's what we pay him for." Unfortunately, managers and directors are not always successful in "reading the minds of their people."

A second criterion for gauging the suitability of a diagnostic procedure is the extent to which it is based on valid theory. The central role of theory is indicated in Alderfer's definition of organizational diagnosis: "A process based on behavioral science theory for publicly entering a human system, collecting valid data about human experiences with that system, and feeding that information back to the system to promote greater understanding of the system by its members."

A third criterion for evaluating the convenience of an organizational diagnosis is the breadth of what it covers. If a diagnosis does not examine all relevant factors, inferences about performance may be imprecise and action plans inappropriate.


The importance of a comprehensive, valid, and theory-based diagnosis raises the question of who should perform it. Three approaches have been taken: relying only on people outside the organization, relying only on members of the organization, and relying on the joint efforts of outsiders and insiders. The weight of opinion clearly favors the third approach, since it combines the benefits of the first two. Outsiders usually lack vested interests to protect and so can be trusted more readily than insiders; therefore, a greater disclosure of relevant information can occur, yielding a more precise diagnosis: "All people have personal interests in organizations. Even if people did not press for their own interests, other members of the system would be unable to accept a consulting relationship with a colleague, and the internal approach would prove ineffective as a result." In addition, since outsiders do not have to worry about living with the members of an organization, they are less pressed to conform to its norms or be influenced by its myths.

However, one problematic characteristic of using an outsider to carry out a diagnosis is that essential aspects of the operation of a system can easily escape their understanding. The outsider may not be sensitive to shared beliefs, myths, and anxieties, or to their underlying logic.

Well-informed insiders, by contrast, can more clearly identify those symbolic, unique aspects of organizational life. Furthermore, the informed insider can help point out specific political realities (coalitions and conflicts) that are an important part of organizational life. By incorporating information about myths, symbols, and politics (the nonrational aspects of organizational behavior), an organizational diagnosis can be more incisive and functional. The insider-outsider team can use various methods to obtain diagnostic information. Three of the most important are observation, interviews, and questionnaire surveys.


Observation of people at work can significantly help the diagnostic process in two ways. First, the observed behavior patterns provide a basis for formulating, and subsequently testing, hypotheses about the organization. Second, during the feedback process, examples drawn from behavioral observations can contribute to the richness and depth of the interpretations. Certainly, such examples help capture some of the quirks of a given organization.

However, many organizational behavior consultants are reluctant to use observation as part of the diagnostic process. It has been suggested that some of the young, highly trained consultants are "so caught up in esoteric graphs, diagrams, and technical jargon that they can no longer look at people as people, but merely as objects with which to fill out multi-page questionnaires suitable for feeding the computer files."


Although employees may have many useful ideas about organizational problems and possible remedies, their ideas will not become apparent through observation and cannot be adequately captured through a questionnaire survey; many people dislike writing. A. A. Imberman, a management consultant, noted: "In 30 years of consulting experience and face-to-face interviews with nearly 500 operational workers, office and production, I have found that there is a gold mine of information in their comments."

Although the diagnostic information thus obtained may be valid, interview data are problematic and insufficient when used as the sole basis for evaluation and feedback. Data from interviews (and observations) (1) do not extend beyond the consultant (as hearsay, they are subject to possible distortions and biases), (2) are relatively qualitative in form, (3) usually represent only a limited amount of data, (4) are difficult to interpret insofar as there are no normative benchmarks or scores (that is, averages), and (5) do not capitalize on the confidence people place in results obtained by computer.


Over the past two decades, questionnaire surveys have been increasingly used in large organizations. Although surveys are sometimes used as an instrument for change, their main value lies in their use as a diagnostic tool, indicating the changes that are necessary. All told, 10 potential benefits have been identified from the use of surveys. These benefits are listed below:
  1. They identify existing problems.
  2. They anticipate future problems.
  3. They prioritize problems.
  4. Breadth of coverage.
  5. They provide a basis for evaluating improvement efforts.
  6. They improve relations with employees.
  7. They improve communication with work groups.
  8. Management training.
  9. They increase the value of non-cash benefits.
  10. Appropriate activities are increased.
Of course, there are some potential drawbacks associated with using questionnaire surveys. The five potential problems identified, however, are not entirely unique to questionnaire surveys: they reflect some of the problems associated with any attempt to diagnose and improve organizational effectiveness.
  1. Negative reactions.
  2. Expectations that arise.
  3. Identification of the required actions.
  4. Generation of conflicts.
  5. Costs and benefits.

There is a wealth of knowledge regarding the development and validation of questionnaire scales, the design and application of surveys, the analysis and interpretation of survey data, and the application of such information for organizational improvement. However, the present analysis is limited to identifying some useful principles in conducting a diagnostic survey.

Generally speaking, the greater the participation of employees in the survey process, the greater the receptivity to change: "Participation reduces opposition and objections… involving individuals or groups in the design of a project or policy ensures their support. Their hand leaves an imprint not only on the design but on themselves. They will see it as a good, feasible plan; after all, they recommended it." It follows, therefore, that organizations should survey the views of all employees, even though a small number of responses may be sufficient to produce statistically reliable results. Employees must feel that their opinions are valued, and the results must be based on the opinions of each and every one, so that they are credible even to those unfamiliar with statistics. Additionally, 100% sampling makes it possible to provide feedback to each work unit.
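The claim that "a small number of responses may be sufficient to produce statistically reliable results" can be made concrete with the standard sample-size formula for estimating a proportion. This is a generic statistical sketch, not from the original text:

```python
import math

def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Responses needed to estimate a proportion within `margin`
    at roughly 95% confidence, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Even large workforces need comparatively few responses
for workforce in (100, 500, 5000):
    print(workforce, "employees ->", required_sample_size(workforce), "responses")
```

Note how the required sample grows much more slowly than the population; the text's recommendation of 100% sampling therefore rests on credibility and unit-level feedback, not on statistical necessity.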

The integrity of the survey process requires careful attention to ensuring the confidentiality of all responses and the anonymity of the respondents, at least as far as management is concerned.

When the results are fed back, some managers tend to play the game of "Guess who said it." Consequently, no detailed breakdown of a result should be given when the sample comprises fewer than 15 people, nor should verbatim personal opinions be provided when the sample is fewer than 10 people.
Even a well-meaning manager may be tempted, in such a circumstance, to act vindictively or with bias. To the extent that managers are encouraged to take action against people rather than against problems, the survey has failed.
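The reporting thresholds above (no detailed breakdown under 15 respondents, no verbatim opinions under 10) are simple to enforce mechanically. The helper below is a hypothetical sketch, not part of the original text:

```python
# Hypothetical guard applying the anonymity thresholds from the text
MIN_DETAIL = 15    # no detailed breakdown below this group size
MIN_COMMENTS = 10  # no verbatim personal opinions below this group size

def reportable(group):
    n = group["n"]
    return {
        "unit": group["unit"],
        "detail": group["scores"] if n >= MIN_DETAIL else "suppressed (n < 15)",
        "comments": group["comments"] if n >= MIN_COMMENTS else "suppressed (n < 10)",
    }

small = {"unit": "Packing", "n": 8, "scores": [3.2, 4.1], "comments": ["too noisy"]}
print(reportable(small))  # both detail and comments come back suppressed
```

Building the rule into the reporting pipeline, rather than trusting each manager's restraint, is what protects anonymity in practice.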

The main benefits of a diagnostic survey begin when the data are fed back to the organization and problem solving begins. Research and experience indicate that significant changes occur only when people come together to work with the data.

As a general rule, organizations should report survey results to employees, no matter how negative those results are, and the results should be reported fully to senior management. Dunham and Smith offer the following advice: "The cardinal rule in reporting survey results to senior executives is to summarize the results in such a way that they can be understood, yet present them so candidly that they cannot be misinterpreted. This demands technical knowledge and competence, and often a great deal of courage! … To do otherwise, however innocently, is to render a disservice to the employees, the management, and the organization."

Presenting results to any group is easier when a conceptual structure is used. One way to start is to present information about the generally favorable aspects of organizational, work, and individual characteristics; after that, specific strengths and weaknesses can be pointed out. When survey findings are presented as an undigested mass of data, they are typically received with indifference, because such information is largely indecipherable to its recipients.

Although dissemination of survey results, interpretation and diagnosis of problems, and conducting analysis sessions to solve problems may be necessary conditions for organizational change, they are not in themselves sufficient. The crucial condition is the institutionalization of mechanisms to ensure that action plans are developed and carried out. When there are no formal arrangements that require such activities, managers tend to ignore, if not forget, such activities.

Finally, as previously noted, survey feedback by itself has been shown to have only limited and temporary effects. In fact, one study found that providing feedback while doing nothing else produced less favorable results than providing no feedback at all. Consequently, survey feedback alone is a very limited tool for organizational improvement. Used as a diagnostic device, however, together with interventions that directly affect organizational and work characteristics, the combined effects can be substantial and long-lasting.


Coordinating quality activities across an organization involves two aspects:

q  Coordination for control
q  Coordination to create change

Coordination for control is often the focal point of a quality department; coordination to create change often relies on "parallel organizations" such as the quality council and quality project teams.

New forms of organization are aimed at removing barriers, or walls, between functional departments.

To achieve excellence in quality, top management must guide the effort towards quality.

The tasks of this leadership can be identified:

q  Establish and serve on the quality council.
q  Establish quality policies.
q  Establish and deploy quality goals.
q  Provide the resources.
q  Provide problem-oriented training.
q  Serve on quality improvement teams.
q  Stimulate improvement.
q  Provide rewards and recognition.

A quality council is a high-level management group that develops the quality strategy and supports its implementation.

The tasks of the quality council include:

q  Formulate the quality policy.
q  Estimate the general dimensions of the quality problem.
q  Establish an infrastructure, including quality councils, projects, and the assignment of responsibilities.
q  Plan training at all levels.
q  Establish support for teams.
q  Provide coordination.
q  Establish new measures to review progress.
q  Design a plan for giving recognition.
q  Establish a publicity plan for quality-related activities.

Middle managers execute the quality strategy through several tasks:
  1. Identify quality problems to be solved.
  2. Serve as leaders of various types of quality teams.
  3. Serve as members of quality teams.
  4. Support the quality council in developing the elements of the quality strategy.
  5. Guide quality activities within their own areas by demonstrating personal commitment and encouraging their employees.
  6. Identify customers and suppliers and meet with them to discover and address their needs.
Quality teams create change. Four important types of teams are:
  1. Quality project teams.
  2. Quality circles.
  3. Business process quality teams.
  4. Self-managed teams.
The implementation of the quality strategy should occur through the line organization rather than through a staff department.

The quality manager of the future will have two roles:
  1. Managing the quality department.
  2. Assisting senior managers in strategic quality management.

A Quality Information System (SIC) is an organized method for collecting, storing and reporting quality information to help decision makers at all levels.

In the past, quality information consisted primarily of plant inspection data. However, products are now more complex, quality control programs now span the full spectrum of functional departments, and the focus has shifted from conformance to specifications to fitness for use. These changing conditions, together with the advent of the computer, have resulted in a broader view of quality information. Service industries have experienced similar changes in their information environment.

The input for a Quality Information System includes:
  • Market Research Information on Quality: These are customer opinions of the product or service provided and the results of customer experience that suggest opportunities to improve fitness for use.
  • Product Design Test Data: This is development test data, data on parts and components under consideration from various vendors, and data on the environment in which the product must operate.
  • Information on design evaluation for Quality: This includes design review meetings, reliability predictions, and failure mode and effects analyses.
  • Information on purchased parts and materials: This includes inspection data, data on tests carried out by an independent laboratory on a purchased item, and information from supplier surveys and supplier ratings.
  • Process data: These data cover the in-plant manufacturing inspection system, from the beginning of manufacturing to the end. They also include process control data and process capability data.
  • Final inspection data: These are the routine data from final inspection.
  • Field performance data: These include the Mean Time Between Failures (MTBF) and other test data from the company, together with information obtained from customers through warranties and claims.
  • Quality measurement results: These include data from audits of functional activities, products, and systems, and data on administrative control.
The scope of a quality information system can vary from a simple system covering in-process inspection data to a broad system covering all information on the overall effectiveness of important products and processes.
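As a small illustration of the field-performance input mentioned above, Mean Time Between Failures is simply total operating time divided by the number of failures. The figures here are invented:

```python
# Hypothetical MTBF calculation for field performance data
def mtbf(total_operating_hours, failures):
    """Mean Time Between Failures = total operating time / number of failures."""
    return total_operating_hours / failures

# e.g., 40 units in the field, 500 hours each, 8 recorded failures
total_hours = 40 * 500
print("MTBF:", mtbf(total_hours, 8), "hours")  # 2500.0 hours
```

Tracking MTBF over successive reporting periods is one way such field data feeds the quality information system.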


An Administrative Information System (SIA) is a computer-based system that provides information for administrative decision-making in financial, technological, marketing, and human resource activities. The SIA aims to provide all the information administrators need through an integrated system. The concept has several characteristics:
  • The input and output of information are planned from the company-wide point of view, rather than using separate departmental systems or handling each request for information on a case-by-case basis.
  • The information that would normally be kept in each department is consolidated to form a database.
  •  There are several uses for the same input data. (This justifies the integrated approach to a database).


Development begins with analyzing customer needs, creating a system design specification, and preparing a proposal indicating the costs and time required.

When management approves the proposal, the system is developed, approved, and implemented. Finally, the performance of the system is reviewed.

A system must be tailored to meet the needs of an organization's internal and external customers:
  • Plan the system to receive information in almost any imaginable form. Although most information will arrive on special forms, the system should make it possible for information to be received and processed from a phone call, a letter, or other means.
  • Provide flexibility to meet new data needs. A prime example is the failure reporting form, which should be reviewed periodically, because critical needs for additional items of information are discovered over time.
  • Provide for data collection in three time frames: (a) real time (continuous), (b) recent (minutes to hours), and (c) historical (long term).
  • Provide for the elimination of data collection that is no longer useful, as well as of reports that are no longer needed. This requires a periodic audit of the use of data and reports.
  • Issue reports that are readable, delivered on time, and contain enough useful detail about current issues to facilitate investigation and corrective action.
  • Prepare summaries covering long periods to highlight potentially problematic areas and show progress on known problems.
  • Maintain a record of the costs of collecting, processing, and reporting information.
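The three collection time frames in the list above amount to classifying each data point by its age. A minimal sketch, with assumed boundaries of one minute and 24 hours:

```python
from datetime import datetime, timedelta

def storage_tier(event_time, now):
    """Classify a data point into the three stages named in the text.
    The one-minute and 24-hour boundaries are assumptions for illustration."""
    age = now - event_time
    if age <= timedelta(minutes=1):
        return "real-time"   # continuous
    if age <= timedelta(hours=24):
        return "recent"      # minutes to hours
    return "historical"      # long term

now = datetime(2024, 1, 10, 12, 0, 0)
print(storage_tier(datetime(2024, 1, 10, 11, 59, 40), now))  # real-time
print(storage_tier(datetime(2024, 1, 10, 6, 0, 0), now))     # recent
print(storage_tier(datetime(2024, 1, 2, 12, 0, 0), now))     # historical
```

In practice each tier would get its own storage and retention policy, which is what makes the periodic audit of data and reports feasible.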

Software is the collection of computer programs, procedures, and associated documentation necessary for the operation of the information system.


Experience shows that a primary problem in developing software is the lack of sufficient communication and understanding between the user and the developer.

The "project management" approach is adopted to plan and control the stages of development. The general stages are: definition of the software requirements, design of the software system, installation and commissioning of the system, and maintenance of the system. Project management and control software are almost always used to organize the work, determine time-critical activities (the critical path), and monitor project progress.
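The critical-path idea can be sketched over the development stages just listed. The durations (in weeks) and the parallel "docs" task are invented for illustration:

```python
# Minimal critical-path sketch over the development stages named in the text.
# Durations (weeks) and the parallel "docs" task are hypothetical.
tasks = {
    "requirements": (3, []),
    "design":       (4, ["requirements"]),
    "docs":         (2, ["requirements"]),   # runs in parallel with design
    "install":      (2, ["design", "docs"]),
    "maintenance":  (6, ["install"]),
}

def earliest_finish(tasks):
    """Longest path to each task's completion (the dict must list
    prerequisites before the tasks that depend on them)."""
    finish = {}
    for name, (duration, deps) in tasks.items():
        finish[name] = duration + max((finish[d] for d in deps), default=0)
    return finish

ef = earliest_finish(tasks)
print(ef)  # the project length is the finish time of "maintenance"
```

Here "install" cannot start until "design" finishes, because design is the longer of the two parallel branches; surfacing such time-critical chains is exactly what critical-path scheduling is for.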


The following steps are usually required for creating a program:
  1. Study of the current information flow system and the desired outputs for the future: The current system should be thoroughly reviewed before proceeding with the development of a program. Typically, a system flow analysis and data flow diagrams are required.
  2. Development of a programming plan: The programmer develops an approach to the project. This approach may include decisions (with the user) about the input and output media, about what programming language to use, and whether or not to use prepared programs.
  3. Details of the processing operations: The programmer prepares detailed flow charts that describe all the input, processing and output elements of the information. These diagrams are drawn using special programming symbols and become the basis for describing programs.
  4. Program writing: The program consists of a sequence of instructions written in a specific programming language and at the same time complies with the rules established for that language.
  5. Program Check for Errors: A desk check (almost always done by the programmer) and a code check (done by the programmer and colleagues) are necessary because of the difficulty of writing even moderately sized programs without making mistakes.
  6. Testing the program on the computer and making the required corrections.
  7. Program Documentation: Generated during the development stage and includes flow charts, a list of steps, the final output format, and special instructions for the computer operator.
  8. Program evaluation: Evaluation begins with the adequacy of the output for the user. It also covers the degree of documentation and the updating of prepared programs.
  9. Provision of training: New software is a mystery to many users, and training must be provided to encourage them to use it and to make its application successful.

For many applications it is virtually impossible to produce a program that is entirely error free. When a program has many lines of code, there will necessarily be errors, and the cost of those errors can be very high. Formal programs have been developed to attack this problem. The main elements almost always include:
  1. Design review: Various reviews are carried out. Their purpose is to evaluate (a) the software requirements, (b) the software design approach, and (c) the detailed design.
  2. Documentation Review: Emphasis is placed on the plans and procedures that will be used to test computer programs. This test plan documentation is a part of the total project documentation.
  3. Software test validation: This consists of reviewing test results to evaluate the software. Tests are classified into two types: static and dynamic. Static tests include design reviews; dynamic tests run the program on the computer using test scenarios to find flaws and weak points.
  4. Corrective Action System: This is similar to the system used for physical products. It includes documentation of all problems and follow-up to ensure their resolution.
  5. Configuration Management: This is the collection of activities to implement design changes.
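The dynamic testing described in point 3, running the program against test scenarios to find flaws, can be sketched as follows; the `discount` function and its scenarios are hypothetical:

```python
# Hypothetical program under test: 10% discount for orders of 10 or more
def discount(price, qty):
    return price * qty * (0.9 if qty >= 10 else 1.0)

# Dynamic test: execute the program with scenarios and record discrepancies
scenarios = [
    ((5.0, 1), 5.0),    # below threshold: no discount
    ((5.0, 10), 45.0),  # at threshold: 10% off
    ((2.0, 20), 36.0),  # above threshold: 10% off
]

failures = [(args, expected, discount(*args))
            for args, expected in scenarios
            if abs(discount(*args) - expected) > 1e-9]
print("failures:", failures)
```

A static test of the same code would instead inspect the source, as in the design reviews of point 1, without ever executing it.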

For those involved in the administration or regulation of large companies, quality information is derived from multiple sources of operational information: laboratory tests, plant tests, field performance data.

The same information, summarized and converted into the appropriate form, becomes an important input to the quality dashboard: an information system that enables busy managers to stay well informed about quality performance and trends without having to get too involved in daily operations.

Operational reports

They are designed to help carry out daily operations with special emphasis on achieving improvement.

Executive reports

These are limited summaries of quality information, such as summaries of factory quality and of customer complaints. The information executives need for control varies greatly between companies, depending on the nature of the product and the degree to which control problems have been resolved. In many companies these executive summary reports are verified by independent auditors; such audits help ensure that the reporting system correctly reflects what is actually happening with regard to quality.

Some organizations use staged indicators of quality performance. 
