Definition of Validated Questionnaire

Methods: To clarify what is meant by the term “validated questionnaire,” we reviewed and discussed prostate-specific and patient-reported outcome assessment tools that have been appropriately validated for use in patients undergoing surgery for localized prostate cancer. For questionnaires where multiple evaluators complete the same tool for each candidate (e.g., a checklist of behaviours or symptoms), the extent to which the evaluators agree in their observations of the same group of candidates can be assessed. This consistency is called inter-rater reliability or inter-rater agreement and can be estimated using kappa statistics. [33] Suppose two clinicians independently assess the mobility of the same group of patients after surgery (e.g., 0 = needs help from more than 2 people; 1 = needs help from 1 person; 2 = independent). Kappa (κ) can then be calculated as κ = (Po - Pe) / (1 - Pe), where Po is the observed proportion of agreement between the two raters and Pe is the proportion of agreement expected by chance.

In this review, we provide guidelines for the development, validation and translation of a questionnaire for use in perioperative and analgesic medicine. The development and translation of a questionnaire require a thorough examination of the format of the questionnaire and the meaning and relevance of its items. Once the development or translation phase is complete, it is important to conduct a pilot study to ensure that the items can be understood and interpreted correctly by the intended respondents. The validation phase is crucial to ensure that the questionnaire is psychometrically sound. While developing and translating a questionnaire is not an easy task, the processes described in this article should enable researchers to obtain efficient and effective questionnaires in their target populations. The following section summarizes the guidelines for translating a questionnaire into another language.

I remember walking through the halls of my university's faculty offices years ago and asking for help validating a questionnaire.
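The kappa calculation described above can be sketched as follows. This is a minimal illustration of Cohen's kappa for two raters; the mobility ratings below are hypothetical, invented purely for the example, and are not data from the review.

```python
# Sketch: Cohen's kappa for two raters scoring post-operative mobility
# (0 = needs help from more than 2 people, 1 = needs help from 1 person,
#  2 = independent). All ratings below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (Po - Pe) / (1 - Pe)."""
    n = len(rater_a)
    # Po: observed agreement, the proportion of patients rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Pe: chance-expected agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

rater_a = [2, 1, 0, 2, 2, 1, 0, 1, 2, 2]
rater_b = [2, 1, 1, 2, 2, 1, 0, 0, 2, 2]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.677
```

Here the two clinicians agree on 8 of 10 patients (Po = 0.8), but 0.38 of that agreement would be expected by chance alone, giving κ ≈ 0.68.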

I repeatedly asked the professors, “Can you tell me how to validate the questions in my survey?” The answer was usually polite: “I can't, but if you talk to Dr. So-and-so, maybe he could help you.” Dr. So-and-so couldn't help either. In fact, no one seemed able to help.

(I strongly recommend that PCA and CA be re-run after completing the formal data-collection phase [i.e., after using your questionnaire to collect “real” data]. You want to make sure you get the same factor-loading patterns.) When you report the results of your study, you can say that you used a questionnaire whose face validity was established by experts. You should also mention that it was pilot-tested on a subset of participants. Report the results of the PCA and CA analyses. Should you report the results from pilot testing or from the official data collection? I think reporting the PCA and CA results on the official data is most useful. When you report the PCA results, you can say something like, “Questions 4, 6, 7, 8 and 10 loaded on the same factor, which we interpreted as personal commitment to the employer.” When you report the CA results, you can say something like, “Cronbach's alpha for the questions representing personal commitment to the employer was 0.91, which indicates excellent internal consistency of responses.”

Pearson's r between the responses from two administrations of a questionnaire can be called the stability coefficient. A higher stability coefficient indicates greater test-retest reliability, because less of the questionnaire's measurement error is attributable to changes in people's responses over time.
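As a sketch of how a Cronbach's alpha figure like the one quoted above might be computed, the following uses hypothetical 5-point Likert responses to five “commitment” items. The item scores and respondent counts are invented for illustration only.

```python
# Sketch: Cronbach's alpha for five hypothetical questionnaire items.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
def cronbachs_alpha(items):
    k = len(items)          # number of items
    def variance(xs):       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col) for col in zip(*items)]   # total score per respondent
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# rows = items, columns = respondents (hypothetical 5-point responses)
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 1, 5],
    [3, 4, 3, 4, 2, 4],
    [4, 5, 3, 5, 2, 5],
]
print(round(cronbachs_alpha(items), 2))  # → 0.96
```

When the items track each other closely across respondents, as here, the item variances are small relative to the variance of the total scores and alpha approaches 1, indicating high internal consistency.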

In general, the first step in validating a survey is to establish its face validity. There are two important steps in this process. First, experts or people who understand your topic should read your questionnaire and assess whether the questions effectively capture the topic being studied. They can pretend to fill out the survey while scribbling notes. Second, you should ask a psychometrician (i.e., an expert in questionnaire construction) to check your survey for common mistakes such as double-barreled, confusing, and leading questions.

Suppose a new scale is developed to assess pain in hospitalized patients. To demonstrate the construct validity of this new pain scale, we can examine the extent to which patient responses on the new scale correlate with existing instruments that also measure pain. This is called convergent validity.

One might expect strong correlations between the new questionnaire and existing measures of the same construct, as they measure the same theoretical construct. An important question to consider when estimating test-retest reliability is how much time should elapse between questionnaire administrations. If the interval between time 1 and time 2 is too short, individuals may remember their time-1 responses, which may lead to an overestimate of test-retest reliability. Respondents, especially those recovering from major surgery, may also experience fatigue if the repeat administration follows shortly after the first, which may lead to an underestimate of test-retest reliability. On the other hand, if a long period elapses between administrations, people's responses may change because of other factors (for example, a respondent may start taking analgesics to treat chronic pain). Unfortunately, there is no single answer. The interval should be long enough to allow memory effects to fade and to avoid fatigue, but not so long that real changes occur that would affect the estimate of test-retest reliability. [17] Ideally, validated questionnaires should be used in all studies.
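The stability coefficient mentioned earlier is simply Pearson's r between the time-1 and time-2 scores. The sketch below computes it from scratch; the total scores for ten respondents are hypothetical, invented for illustration.

```python
# Sketch: stability coefficient = Pearson's r between two administrations
# of the same questionnaire. Scores below are hypothetical totals.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 18, 9, 22, 15, 11, 20, 14, 17, 10]
time2 = [13, 17, 10, 21, 16, 12, 19, 13, 18, 11]
print(round(pearson_r(time1, time2), 2))  # → 0.98
```

A coefficient this close to 1 would suggest that respondents' scores were highly stable across the two administrations, assuming the interval was chosen to avoid both memory and fatigue effects.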

However, in intervention epidemiology, validated questionnaires are very rare. In field epidemiology, epidemiologists often use standard questionnaires that have already been used, for example in foodborne outbreaks. However, the situation differs for each outbreak, study and country, and reusing a standard questionnaire may not capture the exposure of interest. Therefore, you must first generate hypotheses with a trawling questionnaire. This will help you design a questionnaire that is suitable for your study.

If respondents are required to complete the questionnaire themselves, the items must be written so that they can be easily understood by the majority of respondents, usually at a grade 6 reading level. [3] If the questionnaire is to be administered to young respondents or respondents with cognitive impairments, the reading level of the items should be lowered further. Questionnaires for children should take into account the cognitive stages of young respondents[4] (e.g., pictorial response options, such as faces scales to assess pain[5], may be more appropriate).

Construct validity is the most important concept in evaluating a questionnaire designed to measure a construct that is not directly observable (e.g., pain, quality of recovery). If a questionnaire lacks construct validity, it will be difficult to interpret its results, and no inferences can be drawn from the questionnaire responses to the behavioural domain of interest. The construct validity of a questionnaire can be assessed by estimating its association with other variables (or measures of constructs) with which it should be positively, negatively or not at all correlated. [42] In practice, the questionnaire of interest and existing tools measuring similar and dissimilar constructs are administered to the same group of people. Correlation matrices are then used to examine whether the expected patterns of association emerge between different measures of the same construct and between the questionnaire and measures of other constructs. It has been suggested that correlation coefficients of 0.1 be considered small, 0.3 moderate and 0.5 large. [43]

One concept related to content validity is face validity. Face validity refers to the extent to which respondents or laypeople consider the questionnaire items to be valid.

Such a judgment is based less on the technical qualities of the questionnaire items than on whether the items appear to measure a construct that matters to respondents. While this is the weakest way to establish the validity of a questionnaire, face validity can motivate respondents to answer more honestly. For example, if patients feel that a quality-of-recovery questionnaire assesses how well they are recovering from surgery, they may be more likely to respond in a way that reflects their true state of recovery.
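The correlation-matrix check for construct validity described earlier can be sketched as follows. The three score sets are hypothetical: a new pain scale, an established pain measure (convergent validity, so a large correlation is expected) and a mobility scale measuring a distinct construct (a small correlation is expected). All names and scores are invented for illustration, and the small/moderate/large cut-offs follow the 0.1/0.3/0.5 benchmarks cited above.

```python
# Sketch: convergent vs. discriminant validity via pairwise correlations.
# All scale names and scores are hypothetical illustration data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

scales = {
    "new_pain":      [7, 3, 8, 5, 2, 6, 9, 4],  # questionnaire of interest
    "existing_pain": [6, 3, 9, 5, 1, 6, 8, 4],  # same construct
    "mobility":      [5, 4, 6, 3, 5, 7, 4, 6],  # distinct construct
}
names = list(scales)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = pearson_r(scales[a], scales[b])
        size = ("large" if abs(r) >= 0.5 else
                "moderate" if abs(r) >= 0.3 else "small")
        print(f"{a} vs {b}: r = {r:+.2f} ({size})")
```

With data like these, the new scale correlates strongly with the existing pain measure (supporting convergent validity) and only weakly with mobility (supporting discriminant validity), which is the pattern one would want to see in the matrix.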