Probabilistic Methods (Dose Response Panel)

Project ID

4902

Category

Other

Added on

Sept. 24, 2024, 10:51 a.m.

Journal Article

Abstract  Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted based upon probability statements of the target population. We investigate quantile regression as an alternative to the use of the median or mean regression. By defining the dose-response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure-response quantile relationship, which gives the model flexibility to estimate the quantal dose-response function. We describe this methodology and apply it to both epidemiology and toxicology data.
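As an illustration of the quantile-based benchmark dose definition described above, the sketch below finds the dose at which a hypothetical monotone response quantile crosses an impairment threshold. The functional form, parameter values, and threshold are assumptions for illustration only, not quantities from the paper.

```python
# Sketch: benchmark dose defined via a dose-response quantile function.
# All functional forms and numbers are hypothetical illustrations.
import numpy as np
from scipy.optimize import brentq

def quantile_response(dose, q=0.05):
    """Hypothetical q-th quantile of a continuous response at a given dose.

    Here the 5th-percentile response declines monotonically with dose,
    standing in for a fitted (e.g., Bayesian monotone) quantile regression."""
    baseline = 100.0          # quantile of response at zero exposure (assumed)
    slope = 8.0               # decline per unit log-dose (assumed)
    return baseline - slope * np.log1p(dose)

impairment_threshold = 85.0   # responses at or below this count as impaired (assumed)

# BMD: dose at which the chosen population quantile first reaches the threshold,
# i.e., the dose where P(response <= threshold) equals the target probability q.
bmd = brentq(lambda d: quantile_response(d) - impairment_threshold, 1e-6, 1e3)
print(f"Illustrative quantile-based BMD: {bmd:.2f} dose units")
```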

Journal Article

Abstract  In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process that considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw that Bayesian methods would be potentially useful for combining toxicity data from disparate sources: high-throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology.
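A minimal sketch of the refined approach described above, in which the POD and each individual UF are represented as lognormal distributions and combined by Monte Carlo to yield a probabilistic RfD. All geometric means and geometric standard deviations below are assumed placeholders, not values from the IRIS evaluations.

```python
# Sketch: combining a lognormally distributed POD with lognormal uncertainty
# factors (UFs) to obtain a probabilistic RfD. All parameter values are
# illustrative assumptions, not values from the paper or any IRIS assessment.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# POD uncertainty: geometric mean 10 mg/kg-day, geometric SD 1.5 (assumed).
pod = rng.lognormal(mean=np.log(10.0), sigma=np.log(1.5), size=n)

# Each UF represented as its own lognormal distribution (assumed GM/GSD pairs).
uf_specs = {
    "interspecies": (3.0, 2.0),
    "intraspecies": (10.0, 1.8),
    "subchronic_to_chronic": (3.0, 2.0),
}
composite_uf = np.ones(n)
for gm, gsd in uf_specs.values():
    composite_uf *= rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n)

rfd_dist = pod / composite_uf
# Which lower percentile to report is a policy choice; the 5th is shown here.
print(f"Median probabilistic RfD: {np.median(rfd_dist):.4f} mg/kg-day")
print(f"5th percentile RfD:       {np.percentile(rfd_dist, 5):.4f} mg/kg-day")
```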

Journal Article

Abstract  Human exposure limits (HELs) for chemicals with a toxicological threshold are traditionally derived using default assessment factors that account for variations in exposure duration, species sensitivity and individual sensitivity. The present paper elaborates a probabilistic approach for human hazard characterization and the derivation of HELs. It extends the framework for evaluating and expressing uncertainty in hazard characterization recently proposed by WHO-IPCS, i.e. by the incorporation of chemical-specific data on human variability in toxicokinetics. The incorporation of human variability in toxicodynamics was based on the variation between adverse outcome pathways (AOPs). Furthermore, sources of interindividual variability and uncertainty are propagated separately throughout the derivation process. The outcome is a two-dimensional human dose distribution that quantifies the population fraction exceeding a pre-selected critical effect level with an estimate of the associated uncertainty. This enables policy makers to set separate standards for the fraction of the population to be protected and the confidence level of the assessment. The main sources of uncertainty in the human dose distribution can be identified in order to plan new research for reducing uncertainty. Additionally, the approach enables quantification of the relative risk for specific subpopulations. The approach is demonstrated for two pharmaceuticals, i.e. the antibiotic ciprofloxacin and the antineoplastic methotrexate. For both substances, the probabilistic HEL is mainly influenced by uncertainty originating from: (1) the point of departure (PoD), (2) extrapolation from sub-acute to chronic toxicity and (3) interspecies extrapolation. However, when assessing the tails of the two-dimensional human dose distributions, i.e. the section relevant for the derivation of human exposure limits, interindividual variability in toxicodynamics also becomes important.
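A minimal two-dimensional Monte Carlo sketch in the spirit of the approach above: the outer loop samples uncertain quantities, the inner loop samples interindividual variability, and the output is an uncertainty distribution for the fraction of the population affected at a candidate exposure level. All distributions and values are assumptions.

```python
# Sketch: two-dimensional Monte Carlo in which uncertainty (outer loop) and
# interindividual variability (inner loop) are propagated separately, giving
# an uncertainty distribution for the population fraction affected at a
# candidate exposure level. All inputs are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_uncertainty, n_variability = 500, 5_000
exposure_level = 1.0   # candidate human exposure limit, mg/kg-day (assumed)

fractions_affected = np.empty(n_uncertainty)
for i in range(n_uncertainty):
    # Outer loop: uncertain quantities (animal POD and interspecies extrapolation).
    pod_animal = rng.lognormal(np.log(50.0), np.log(2.0))      # mg/kg-day (assumed)
    interspecies = rng.lognormal(np.log(4.0), np.log(1.8))     # assumed
    pod_human_median = pod_animal / interspecies
    # Inner loop: interindividual variability in toxicokinetics and toxicodynamics
    # spreads the dose at which the critical effect occurs across individuals.
    tk = rng.lognormal(0.0, np.log(1.6), n_variability)
    td = rng.lognormal(0.0, np.log(1.9), n_variability)
    individual_effect_dose = pod_human_median / (tk * td)
    fractions_affected[i] = np.mean(individual_effect_dose < exposure_level)

print(f"Fraction affected at {exposure_level} mg/kg-day: "
      f"median {np.median(fractions_affected):.2%}, "
      f"95th percentile {np.percentile(fractions_affected, 95):.2%}")
```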

Journal Article

Abstract  The downward revision of the bisphenol A (BPA) Health-based Guidance Value (HBGV) by the European Food Safety Authority (EFSA) has led to disagreements with other regulatory agencies, among them the German Federal Institute for Risk Assessment (BfR). The BfR has recently published an alternative Tolerable Daily Intake (TDI), 1000 times higher than the EFSA HBGV of 0.2 ng/kg/d. While the EFSA value is defined in relation to immunotoxicity, the BfR alternative TDI is based on declines in sperm counts resulting from exposures in adulthood. Earlier, we had used semen quality deteriorations to estimate a BPA Reference Dose (RfD) of 3 ng/kg/d for use in mixture risk assessments of male reproductive health. We derived this estimate from animal studies of gestational BPA exposures which both EFSA and BfR viewed as irrelevant for human hazard characterisations. Here, we identify factors that drive these diverging views. We find that the fragmented, endpoint-oriented study evaluation system used by EFSA and BfR, with its emphasis on data that can support dose-response analyses, has obscured the overall BPA effect pattern relevant to male reproductive effects. This has led to a disregard for the effects of gestational BPA exposures. We also identify problems with the study evaluation schemes used by EFSA and BfR which lead to the omission of entire streams of evidence from consideration. The main driver of the diverging views of EFSA and BfR is the refusal by BfR to accept immunotoxic effects as the basis for establishing an HBGV. We find that switching from immunotoxicity to declines in semen quality as the basis for deriving a BPA TDI by deterministic or probabilistic approaches produces values in the range of 2.4-6.6 ng/kg/d, closer to the present EFSA HBGV of 0.2 ng/kg/d than the BfR TDI of 200 ng/kg/d. The proposed alternative BfR value is the result of value judgements which erred on the side of disregarding evidence that could have supported a lower TDI. The choices made in terms of selecting key studies and methods for dose-response analyses produced a TDI that comes close to doses shown to produce effects on semen quality in animal studies and in human studies of adult BPA exposures.

Journal Article

Abstract  Probabilistic exposure and risk assessment of chemical hazards in the diet have increasingly gained ground in recent years as a pragmatic approach for the approximation of reality. This work presents the outcomes of a project which aimed at applying probabilistic techniques for basic modelling of chronic dietary exposure to food contaminants following EFSA guidance. These techniques, based on Monte Carlo Risk Assessment (MCRA) software and on the programming language R, were employed for the risk assessment of cadmium for Austrian adults, enabling the validation and the critical comparison of the two approaches. Harmonisation and optimisation of procedures, refinement of exposure assessment skills and confidence in the results were the main benefits. Data amount and validity were identified as critical parameters, influencing the precision of the results. Cadmium was selected as a case study due to its toxicological properties, its ubiquitous presence in food and the availability of Austrian occurrence data. Similar exposure and risk estimates were generated through MCRA and R in alternative optimistic and pessimistic exposure scenarios, suggesting low levels of concern, except for vegetarians, whose upper tail exposures are close to the established Tolerable Weekly Intake. However, as occurrence data gaps have been identified as the major element of uncertainty, the estimated exposure and risk levels are characterised as underestimated. Grains and grain-based products, potatoes and leafy vegetables are the main contributors to the intake. The results will contribute to risk management and to a future refinement of the assessment.
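A basic Monte Carlo sketch of chronic dietary exposure of the kind described above, comparing simulated weekly cadmium intake with a tolerable weekly intake. The food groups and the consumption and occurrence distributions are invented placeholders; 2.5 µg/kg bw per week is the commonly cited TWI for cadmium.

```python
# Sketch: probabilistic chronic dietary exposure to a contaminant (here cadmium),
# comparing weekly intake with a tolerable weekly intake (TWI). All consumption
# and occurrence distributions are invented placeholders; the TWI of
# 2.5 ug/kg bw per week is the value commonly cited for cadmium.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
twi = 2.5  # ug cadmium per kg body weight per week

# Placeholder long-term consumption (g/day) of three food groups and their
# cadmium concentrations (mg/kg); the lognormal shapes are assumptions.
foods = {
    "grains":    (180.0, 0.60, 0.025),   # (median g/day, sigma of log, median mg/kg)
    "potatoes":  (120.0, 0.70, 0.020),
    "leafy_veg": (40.0,  0.90, 0.050),
}
body_weight = rng.normal(75.0, 12.0, n).clip(40, 150)  # kg (assumed)

weekly_intake = np.zeros(n)  # ug Cd per kg bw per week
for med_g, sdlog, conc in foods.values():
    consumption = rng.lognormal(np.log(med_g), sdlog, n)     # g/day
    concentration = rng.lognormal(np.log(conc), 0.8, n)      # mg/kg
    # g/day -> kg/day, mg -> ug, day -> week, then per kg body weight.
    weekly_intake += consumption / 1000 * concentration * 1000 * 7 / body_weight

print(f"P95 weekly intake: {np.percentile(weekly_intake, 95):.2f} ug/kg bw/week")
print(f"Fraction exceeding TWI ({twi}): {np.mean(weekly_intake > twi):.2%}")
```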

Journal Article

Abstract  Uncertainties in the occurrence, fate and hazard of Contaminants of Emerging Concern (CECs) increasingly challenge drinking water (DW) utilities to decide whether additional measures should be taken to reduce the health risk. This has led to the development and evaluation of risk-based approaches by the scientific community. DW guideline values are commonly derived based on deterministic chemical risk assessment (CRA). Here, we propose a new probabilistic procedure, a quantitative chemical risk assessment (QCRA), to assess the potential health risk related to the occurrence of CECs in DW. The QCRA includes uncertainties in risk calculation in both the exposure and hazard assessments. To quantify the health risk in terms of a probabilistic benchmark quotient distribution, the QCRA estimates the probabilistic distribution of CEC concentrations in DW based on their concentrations in source water and on simulated breakthrough curves of a granular activated carbon (GAC) treatment process. The model input and output uncertainties were evaluated by sensitivity and uncertainty analyses for each step of the risk assessment to identify the most relevant factors affecting risk estimation. The dominant factors were the concentrations of CECs in water sources, the GAC isotherm parameters and the toxicological data. To demonstrate the potential of this new QCRA approach, several case studies are considered, focusing on bisphenol A as an example CEC and on various GAC management options. QCRA quantifies risk probabilistically, providing more insight than CRA, and proved more effective in supporting intervention prioritization for treatment optimization aimed at minimizing health risk.

Journal Article

Abstract  An experimental probabilistic approach for health risk assessment was applied for graphene nanoplatelets (GNPs). The hazard assessment indicated a low level of toxicity for the GNPs. The benchmark dose method, based on sub-chronic and chronic inhalation exposure studies, was used to quantify a guidance value (BMC(h)) for occupational inhalation exposure to GNPs, expressed as a lognormal distribution with a geometric mean ± geometric standard deviation of 0.212 ± 7.79 mg/m³ and 9.37 × 10⁴ ± 7.6 particle/cm³. Exposure scenarios (ES) were defined based on the scientific literature for large-scale production (ES1) and manufacturing (ES2) of GNPs; a third ES, concerning in-lab handling of GNPs (ES3), was based on results of experiments performed for this study. A probability distribution function was then assumed for each ES. The risk magnitude was calculated using a risk characterization ratio (RCR), defined as the ratio of the exposure distributions and the BMC(h) distribution. All three ES resulted in RCR distributions ≥ 1 (i.e. risk present); however, none of the ES had a statistically significant level of risk at a 95% confidence interval. A sensitivity analysis indicated that approximately 75% of the variation in the RCR distributions was due to uncertainties in the BMC(h) calculation.
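A short sketch of the risk characterisation ratio calculation described above, sampling the reported BMC(h) lognormal and an assumed (invented) lognormal exposure distribution to obtain the RCR distribution and the probability that it exceeds 1.

```python
# Sketch: risk characterization ratio (RCR) as a ratio of distributions,
# using the reported BMC(h) lognormal (GM 0.212 mg/m3, GSD 7.79) and an
# invented lognormal occupational exposure distribution for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

bmc_h = rng.lognormal(mean=np.log(0.212), sigma=np.log(7.79), size=n)   # mg/m3
exposure = rng.lognormal(mean=np.log(0.05), sigma=np.log(3.0), size=n)  # mg/m3 (assumed)

rcr = exposure / bmc_h
print(f"Median RCR: {np.median(rcr):.3f}")
print(f"P(RCR >= 1): {np.mean(rcr >= 1):.2%}")
print(f"95th percentile RCR: {np.percentile(rcr, 95):.2f}")
```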

Journal Article

Abstract  A model and data toolbox is presented to assess risks from combined exposure to multiple chemicals using probabilistic methods. The Monte Carlo Risk Assessment (MCRA) toolbox, also known as the EuroMix toolbox, has more than 40 modules addressing all areas of risk assessment, and includes a data repository with data collected in the EuroMix project. This paper gives an introduction to the toolbox and illustrates its use with examples from the EuroMix project. The toolbox can be used for hazard identification, hazard characterisation, exposure assessment and risk characterisation. Examples for hazard identification are selection of substances relevant for a specific adverse outcome based on adverse outcome pathways and QSAR models. Examples for hazard characterisation are calculation of benchmark doses and relative potency factors with uncertainty from dose response data, and use of kinetic models to perform in vitro to in vivo extrapolation. Examples for exposure assessment are assessing cumulative exposure at external or internal level, where the latter option is needed when dietary and non-dietary routes have to be aggregated. Finally, risk characterisation is illustrated by calculation and display of the margin of exposure for single substances and for the cumulation, including uncertainties derived from exposure and hazard characterisation estimates.

Journal Article

Abstract  The assessment of the safety of nano-biomedical products for patients is an essential prerequisite for their market authorization. However, it is also required to ensure the safety of the workers who may be unintentionally exposed to the nano-biomaterials (NBMs) in these medical applications during their synthesis, formulation into products and end-of-life processing, and also of the medical professionals (e.g., nurses, doctors, dentists) using the products for treating patients. There is only a handful of workplace risk assessments focussing on NBMs used in medical applications. Our goal is to contribute to increasing the knowledge in this area by assessing the occupational risks of magnetite (Fe₃O₄) nanoparticles coated with PLGA-b-PEG-COOH used as a contrast agent in magnetic resonance imaging (MRI) by applying the software-based Decision Support System (DSS) which was developed in the EU H2020 project BIORIMA. The occupational risk assessment was performed according to regulatory requirements and using state-of-the-art models for hazard and exposure assessment, which are part of the DSS. Exposure scenarios for each life cycle stage were developed using data from literature, inputs from partnering industries and results of a questionnaire distributed to healthcare professionals, i.e., physicians, nurses, technicians working with contrast agents for MRI. Exposure concentrations were obtained either from predictive exposure models or monitoring campaigns designed specifically for this study. Derived No-Effect Levels (DNELs) were calculated by means of the APROBA tool starting from in vivo hazard data from literature. The exposure estimates/measurements and the DNELs were used to perform probabilistic risk characterisation for the formulated exposure scenarios, including uncertainty analysis. The obtained results revealed negligible risks for workers along the life cycle of magnetite NBMs used as contrast agent for the diagnosis of tumour cells in all exposure scenarios except one, in which the risk is considered acceptable only after the adoption of specific risk management measures. The study also demonstrated the added value of using the BIORIMA DSS for quantification and communication of occupational risks of nano-biomedical applications and the associated uncertainties.

Journal Article

Abstract  OBJECTIVE: The objective was to augment a burn injury model, BURNSIM, with probabilistic dose-response risk curves. METHODS: To develop the dose-response, we drew on a considerable amount of historical porcine burn injury data collected by U.S. Army Aeromedical Research Laboratory in the 1970s. The experimental parameters of each usable data point served as inputs to BURNSIM to calculate the burn damage integral (i.e., the internal dose) for 4 severities (mild, intermediate, deep second- and third-degree burns). The binary probability response was constructed and logistic regression was applied to generate the respective dose-response. Historic data collected at the University of Rochester in the 1950s were used for validation. RESULTS: Four dose-response curves were generated, ranging from mild to third degree, with tight 95% confidence bands for mild to deep second degree, and slightly wider bands for third degree. Parametric sensitivity analysis revealed that epidermal and whole skin thicknesses, skin temperature, and blood flow rate have a large effect on predicted outcomes. CONCLUSIONS: Addition of dose-response curves provides a critical augmentation to BURNSIM to improve operational risk assessments of burn hazard. Future recommendations for BURNSIM include the use of body location- and gender-specific parameters with coupling to a thermoregulatory model.
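A minimal sketch of fitting a logistic dose-response curve to binary injury outcomes against an internal dose metric, as described above; the data are simulated placeholders rather than the historical porcine or Rochester data.

```python
# Sketch: fitting a logistic dose-response curve to binary injury outcomes
# against an internal dose metric (e.g., a burn damage integral). The data
# below are simulated placeholders, not the historical experimental data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
n = 300
log_dose = rng.uniform(-1.0, 2.0, n)                 # log10 damage integral (simulated)
true_b0, true_b1 = -3.0, 4.0                         # "true" parameters (assumed)
injured = rng.random(n) < expit(true_b0 + true_b1 * log_dose)

def neg_log_lik(beta):
    # Bernoulli log-likelihood for the logistic model, clipped for stability.
    p = np.clip(expit(beta[0] + beta[1] * log_dose), 1e-12, 1 - 1e-12)
    return -np.sum(injured * np.log(p) + (~injured) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
b0, b1 = fit.x
ed50 = -b0 / b1   # log-dose at which predicted injury probability is 50%
print(f"Fitted intercept {b0:.2f}, slope {b1:.2f}, ED50 (log scale) {ed50:.2f}")
```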

Journal Article

Abstract  Human cell-based population-wide in vitro models have been proposed as a strategy to derive chemical-specific estimates of inter-individual variability; however, the utility of this approach has not yet been tested for cumulative exposures in mixtures. This study aimed to test defined mixtures and their individual components and determine whether adverse effects of the mixtures were likely to be more variable in a population than those of the individual chemicals. The in vitro model comprised 146 human lymphoblastoid cell lines from four diverse subpopulations of European and African descent. Cells were exposed, in concentration-response, to 42 chemicals from diverse classes of environmental pollutants; in addition, eight defined mixtures were prepared from these chemicals using several exposure- or hazard-based scenarios. Points of departure for cytotoxicity were derived using Bayesian concentration-response modeling and population variability was quantified in the form of a toxicodynamic variability factor (TDVF). We found that 28 chemicals and all mixtures exhibited concentration-response cytotoxicity, enabling calculation of the TDVF. The median TDVF across test substances, for both individual chemicals and defined mixtures, ranged from the default assumption of toxicodynamic variability in the human population (10^(1/2)) to >10. The data also provide a proof of principle for single-variant genome-wide association mapping for toxicity of the chemicals and mixtures, although replication would be necessary due to statistical power limitations with the current sample size. This study demonstrates the feasibility of using a set of human lymphoblastoid cell lines as an in vitro model to quantify the extent of inter-individual variability in hazardous properties of both individual chemicals and mixtures. The data show that population variability of the mixtures is unlikely to exceed that of the most variable component, and that similarity in genome-wide associations among components may be used to accrue additional evidence for grouping of constituents in a mixture for cumulative assessments.
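One common way to compute a toxicodynamic variability factor is as the ratio of the median POD to a lower-percentile POD across cell lines. The sketch below assumes that definition and uses simulated PODs, so both the definition and the numbers are illustrative assumptions rather than the study's own calculation.

```python
# Sketch: a toxicodynamic variability factor (TDVF) computed from per-cell-line
# points of departure (PODs). The definition used here (median POD divided by
# the 1st-percentile POD) and the simulated POD values are assumptions for
# illustration only.
import numpy as np

rng = np.random.default_rng(6)
# Simulated cytotoxicity PODs (uM) for 146 cell lines for one test substance.
pods = rng.lognormal(mean=np.log(30.0), sigma=np.log(2.2), size=146)

tdvf_01 = np.median(pods) / np.percentile(pods, 1)
print(f"TDVF (median / 1st percentile POD): {tdvf_01:.1f}")
print(f"Default half-log assumption for comparison: {10**0.5:.2f}")
```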

Journal Article

Abstract  Benchmark analysis is a general risk estimation strategy for identifying the benchmark dose (BMD) past which the risk of exhibiting an adverse environmental response exceeds a fixed, target value of benchmark response. Estimation of BMD and of its lower confidence limit (BMDL) is well understood for the case of an adverse response to a single stimulus. In many environmental settings, however, one or more additional, secondary, qualitative factor(s) may collude to affect the adverse outcome, such that the risk changes with differential levels of the secondary factor. Bayesian methods for estimation of the BMD and BMDL have grown in popularity, and a large variety of candidate dose-response models is available for applying these methods. This article applies Bayesian strategies to a mixed-factor setting with a secondary qualitative factor possessing two levels to derive two-factor Bayesian BMDs and BMDLs. We present reparameterized dose-response models that allow for explicit use of prior information on the target parameter of interest, the BMD. We also enhance our Bayesian estimation technique for BMD analysis by applying Bayesian model averaging to produce the BMDs and BMDLs, overcoming associated questions of model adequacy when multimodel uncertainty is present. An example from environmental carcinogenicity testing illustrates the calculations.
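A schematic sketch of Bayesian model averaging for the BMD: assuming each candidate dose-response model has already yielded a posterior BMD sample and a posterior model weight, the averaged posterior is a weight-mixed draw from which a BMDL can be read off. The samples and weights below are invented placeholders, and the dose-response fitting itself is not shown.

```python
# Sketch: Bayesian model averaging of a benchmark dose. Assume each candidate
# dose-response model has already produced a posterior sample of the BMD and a
# posterior model weight; the averaged posterior is the weight-mixed draw.
# The samples and weights below are invented placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_draws = 20_000

# Posterior BMD samples (mg/kg-day) from three hypothetical fitted models.
bmd_posteriors = {
    "logistic":   rng.lognormal(np.log(12.0), 0.25, n_draws),
    "weibull":    rng.lognormal(np.log(9.0),  0.35, n_draws),
    "multistage": rng.lognormal(np.log(15.0), 0.30, n_draws),
}
weights = {"logistic": 0.5, "weibull": 0.3, "multistage": 0.2}  # assumed model probabilities

# Mix: for each draw, pick a model with probability equal to its weight.
models = list(bmd_posteriors)
choice = rng.choice(len(models), size=n_draws, p=[weights[m] for m in models])
averaged = np.array([bmd_posteriors[models[c]][i] for i, c in enumerate(choice)])

bmd_est = np.median(averaged)
bmdl = np.percentile(averaged, 5)   # lower one-sided 95% bound on the BMD
print(f"Model-averaged BMD: {bmd_est:.1f}, BMDL05: {bmdl:.1f} mg/kg-day")
```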

Journal Article

Abstract  PURPOSE: Reducing chemical pressure on human and environmental health is an integral part of the global sustainability agenda. Guidelines for deriving globally applicable, life cycle based indicators are required to consistently quantify toxicity impacts from chemical emissions as well as from chemicals in consumer products. In response, we elaborate the methodological framework and present recommendations for advancing near-field/far-field exposure and toxicity characterization, and for implementing these recommendations in the scientific consensus model USEtox. METHODS: An expert taskforce was convened by the Life Cycle Initiative hosted by UN Environment to expand existing guidance for evaluating human toxicity impacts from exposure to chemical substances. This taskforce evaluated advances since the original release of USEtox. Based on these advances, the taskforce identified two major aspects that required refinement, namely integrating near-field and far-field exposure and improving human dose-response modeling. Dedicated efforts have led to a set of recommendations to address these aspects in an update of USEtox, while ensuring consistency with the boundary conditions for characterizing life cycle toxicity impacts and being aligned with recommendations from agencies that regulate chemical exposure. The proposed framework was finally tested in an illustrative rice production and consumption case study. RESULTS AND DISCUSSION: On the exposure side, a matrix system is proposed and recommended to integrate far-field exposure from environmental emissions with near-field exposure from chemicals in various consumer product types. Consumer exposure is addressed via submodels for each product type to account for product characteristics and exposure settings. Case study results illustrate that product-use related exposure dominates overall life cycle exposure. On the effect side, a probabilistic dose-response approach combined with a decision tree for identifying reliable points of departure is proposed for non-cancer effects, following recent guidance from the World Health Organization. This approach allows for explicitly considering both uncertainty and human variability in effect factors. Factors reflecting disease severity are proposed to distinguish cancer from non-cancer effects, and within the latter discriminate reproductive/developmental and other non-cancer effects. All proposed aspects have been consistently implemented into the original USEtox framework. CONCLUSIONS: The recommended methodological advancements address several key limitations in earlier approaches. Next steps are to test the new characterization framework in additional case studies and to close remaining research gaps. Our framework is applicable for evaluating chemical emissions and product-related exposure in life cycle assessment, chemical alternatives assessment and chemical substitution, consumer exposure and risk screening, and high-throughput chemical prioritization.

Journal Article

Abstract  This study analysed the probabilistic risk to consumers associated with the presence of iAs, Cd, Cr, Hg, Pb, acrylamide (AA) and ochratoxin A (OTA) in instant coffee from Brazil, Colombia, Mexico and Peru. The results found iAs to be the metal with the highest concentrations (3.50 × 10⁻² to 6.00 × 10⁻² mg/kg), closely followed by Pb (1.70 × 10⁻² to 2.70 × 10⁻² mg/kg) and Cr (5.00 × 10⁻³ to 1.00 × 10⁻² mg/kg), although these differences were not significant between countries. Cd and Hg were not detected. Focusing on AA, the concentrations ranged from 1.77 × 10⁻¹ mg/kg (Peru) to 4.77 × 10⁻¹ mg/kg (Brazil), while OTA ranged from 1.32 × 10⁻³ (Peru) to 1.77 × 10⁻³ mg/kg (Brazil) with significant differences between countries in both cases. As regards risk, the hazard quotient and hazard index were less than 1, meaning that the consumption of instant coffee represents a low level of concern for non-genotoxic effects. The results of the combination of margin of exposure and probability of exceedance indicated that the non-genotoxic effects of Pb, AA and OTA pose no threat. However, the probability values of suffering cancer from iAs and AA (between 1 × 10⁻⁶ and 1 × 10⁻⁴) indicated a moderate risk and that management measures should be taken.
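The hazard quotient, hazard index, and margin of exposure arithmetic used in the assessment above, shown with invented exposure estimates and reference values for illustration only.

```python
# Sketch: hazard quotient (HQ), hazard index (HI), and margin of exposure (MOE)
# arithmetic for a set of co-occurring contaminants. The exposure estimates and
# reference values below are invented placeholders.
exposures = {        # estimated daily intake, mg/kg bw/day (assumed)
    "Pb":  2.0e-5,
    "iAs": 4.0e-5,
    "AA":  3.5e-4,
    "OTA": 1.3e-6,
}
reference_doses = {  # non-genotoxic reference values, mg/kg bw/day (assumed)
    "Pb":  6.3e-4,
    "iAs": 3.0e-4,
    "AA":  2.0e-3,
    "OTA": 1.7e-5,
}
bmdl_genotoxic = {"iAs": 3.0e-3, "AA": 1.7e-1}  # BMDLs for MOE, mg/kg bw/day (assumed)

hq = {s: exposures[s] / reference_doses[s] for s in exposures}
hi = sum(hq.values())
moe = {s: bmdl_genotoxic[s] / exposures[s] for s in bmdl_genotoxic}

print("HQ per substance:", {s: round(v, 3) for s, v in hq.items()})
print(f"Hazard index: {hi:.3f}  (concern if > 1)")
print("MOE per substance:", {s: round(v) for s, v in moe.items()})
```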

Journal Article

Abstract  In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low-dose risk estimation with quantal data under different dose-response models (multistage, Abbott-adjusted Weibull, and Abbott-adjusted log-logistic models) have appeared in the literature. The use of simultaneous three-segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one-sided simultaneous hyperbolic and three-segment bands for the simple log-logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three-segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.
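A small sketch of extra risk under an Abbott-adjusted log-logistic model and its closed-form inversion to the benchmark dose at a target BMR; the background and slope parameters are assumed, and no confidence-band construction is attempted here.

```python
# Sketch: extra risk under an Abbott-adjusted log-logistic model and the
# benchmark dose that attains a target benchmark response (BMR). Parameter
# values are invented for illustration.
import numpy as np
from scipy.special import expit, logit

gamma, b0, b1 = 0.05, -4.0, 1.5     # background, intercept, log-dose slope (assumed)

def prob_response(dose):
    """Abbott-adjusted log-logistic: gamma + (1 - gamma) * F(dose)."""
    return gamma + (1.0 - gamma) * expit(b0 + b1 * np.log(dose))

def extra_risk(dose):
    p0 = gamma                        # response in the limit of zero dose
    return (prob_response(dose) - p0) / (1.0 - p0)

bmr = 0.10
bmd = np.exp((logit(bmr) - b0) / b1)  # closed-form inversion of extra_risk(bmd) = bmr
print(f"BMD at BMR={bmr:.0%}: {bmd:.2f} dose units")
print(f"Check extra risk at BMD: {extra_risk(bmd):.3f}")
```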

Journal Article

Abstract  In risk assessment, it is often desired to make inferences on the risk at certain low doses or on the dose(s) at which a specific benchmark risk (BMR) is attained. At times, several dose levels or BMRs are of interest, and some form of multiplicity adjustment is necessary to ensure a valid simultaneous inference at the desired confidence level. Bonferroni correction is often employed in practice for such purposes. Though relatively simple to implement, the Bonferroni strategy can suffer from extreme conservatism (Nitcheva et al., 2005; Al-Saidy et al., 2003). Recently, Kerns (2017) proposed the use of simultaneous hyperbolic and three-segment bands to perform multiple inferences in risk assessment under the Abbott-adjusted log-logistic model with the dose level constrained to a given interval. In this paper, we present and compare methods for deriving multiplicity-adjusted upper limits on extra risk and lower bounds on the benchmark dose under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the methods.

Journal Article

Abstract  To date, the models applied in China for risk assessment and cleanup level estimation still carry considerable uncertainty. To address this, this study combined the advantages of the traditional model and the probabilistic risk assessment model to create a new model that fits China's exposure scenarios and enhances the accuracy of health risk assessment and cleanup level estimation. Applying the traditional model to health risk assessment and cleanup level estimation at coking plants showed that the selection of point estimates influenced the results, which increased the uncertainty in the outcome of the risk assessment and cleanup level estimates. The new model instead uses the 95th percentile range of the output distribution to establish a confidence interval, addressing the uncertainty of the traditional model. The cancer risks calculated using the new model were one-fifth to one-third of those calculated using the traditional model, showing that the new model can eliminate the conservativeness of the traditional model. For cleanup level estimation, the cleanup levels calculated by the new model can control the risk at 95% for the coking plant, whereas those calculated by the traditional model only control the risk at 69-80%; the cleanup levels obtained using the traditional model may therefore underestimate the exposure risk from pollutants. The concentration of contaminants in the surface soil was the most sensitive variable for the risk outcomes, but the most important parameter for cleanup level estimation was exposure duration. This study highlights the positive role of the new model in improving the accuracy of risk assessments and cleanup level estimation.

Journal Article

Abstract  Fipronil, a broad-spectrum insecticide, is widely used in agriculture and veterinary practices. Fipronil-induced neurotoxicity and potential adverse effects on humans and aquatic organisms have raised health concerns. Monitoring programs have been implemented globally to assess fipronil residues in food, including fruits, vegetables, and animal products. However, previous exposure assessments have often focused on specific food categories or subsets of items, resulting in limited insights into the overall health risks. Additionally, the large number of non-detect fipronil residues in food has introduced uncertainties in exposure assessment. To address these issues, a probabilistic exposure assessment and dose-response analysis were adopted in this study, considering the sample distribution below the detection limit to better characterize uncertainties and population variability in health risk assessments. The estimated fipronil exposure to the general public ranges from 6.38 × 10⁻⁶ ± 0.00017 mg/kg/day to 9.83 × 10⁻⁶ ± 0.00034 mg/kg/day. Only one out of 200,000 simulated individuals had a fipronil dose exceeding the probabilistic reference dose (0.048 mg/kg/day, pRfD), which aims to protect 99% of the population with effects less than 10% extra risk. By incorporating uncertainties in exposure and dose-response data, a more comprehensive understanding of the health risks associated with fipronil exposure in the Taiwanese population has been achieved.
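A sketch of one way to handle non-detects of the kind discussed above: fit a lognormal to the occurrence data with left-censoring at the limit of detection via maximum likelihood, then sample the fitted distribution in a simple exposure simulation. The residue data, LOD, and dietary inputs are all invented.

```python
# Sketch: handling non-detect residue data by fitting a lognormal with
# left-censoring at the limit of detection (LOD) via maximum likelihood,
# then sampling the fitted distribution in an exposure simulation.
# The residue values, LOD, and consumption inputs are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
lod = 0.005                                              # mg/kg (assumed)
true_residues = rng.lognormal(np.log(0.004), 0.9, 400)   # simulated residues, many below LOD
detected = true_residues >= lod
obs = true_residues[detected]                            # quantified values only
n_censored = np.count_nonzero(~detected)

def neg_log_lik(theta):
    # Lognormal likelihood: exact density for detects, CDF at the LOD for non-detects.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_detected = norm.logpdf(np.log(obs), mu, sigma).sum()
    ll_censored = n_censored * norm.logcdf((np.log(lod) - mu) / sigma)
    return -(ll_detected + ll_censored)

fit = minimize(neg_log_lik, x0=np.array([np.log(lod), 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

# Exposure simulation with the fitted residue distribution (placeholder diet).
residue = rng.lognormal(mu_hat, sigma_hat, 100_000)       # mg/kg
consumption = rng.lognormal(np.log(0.2), 0.5, 100_000)    # kg/day (assumed)
bw = 60.0                                                 # kg body weight (assumed)
dose = residue * consumption / bw                         # mg/kg bw/day
print(f"Fitted GM {np.exp(mu_hat):.4f} mg/kg, GSD {np.exp(sigma_hat):.2f}")
print(f"99.9th percentile dose: {np.percentile(dose, 99.9):.2e} mg/kg/day")
```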

Journal Article

Abstract  Development of toxicology-based criteria such as occupational exposure levels (OELs) is rarely straightforward. This process requires a rigorous review of the literature, searching for patterns in toxicity, biological plausibility, coherence, and dose-response relationships. Despite their direct applicability, human data are rarely used, primarily because of imprecise exposure estimates, unknown influence of assumptions, and confounding factors. As a result, high reliance is often placed on laboratory animal data. Data from a single study are typically used to represent an entire database when extrapolating an OEL, even for data-rich compounds. Here we present a holistic framework for evaluating epidemiological, controlled in vivo, mechanistic/in vitro, and computational evidence that can be useful in deriving OELs. It begins with describing a documented review process of the literature, followed by sorting of data into either controlled laboratory in vivo, in silico/read-across, mechanistic/in vitro, or epidemiological/field data categories. Studies are then evaluated and qualified based on rigor, risk of bias, and applicability for point of departure development. Other data (e.g., in vitro, in silico estimates, read-across data and mechanistic information, and data that failed to meet the former criteria) are used alongside qualified epidemiological exposure estimates to help inform points of departure or human-equivalent concentrations that are based on toxic end points. Bayesian benchmark dose methods are used to estimate points of departure and for estimating uncertainty factors (UFs) to develop preliminary OELs. These are then compared with epidemiological data to support the OEL and the use and magnitude of UFs, when appropriate.

Journal Article

Abstract  The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10⁻⁵ virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome-copy data for norovirus. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured in Disability Adjusted Life Years (DALYs), when compared to results using the genome-copy norovirus data. In some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10⁻⁴ DALY per person per year benchmark for consumption of vegetables irrigated with wastewater, although these results are considered to be highly conservative risk estimates. The fecal indicator conversion ratio model of stream-water and drain-water sources of wastewater achieved the 10⁻⁶ DALY per person per year threshold, which tends to indicate an underestimation of health risk when compared to using genome copies for estimating the dose.
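A minimal QMRA chain of the kind described above, from genome copies on irrigated produce through a dose-response model to an annual burden in DALYs. The exponential dose-response parameter, concentrations, intake, and DALY-per-case values are assumptions for illustration; published norovirus dose-response models differ.

```python
# Sketch: a minimal QMRA chain from norovirus genome copies on irrigated
# lettuce to an annual disease burden in DALYs. The exponential dose-response
# parameter r, the concentration, intake and DALY-per-case values are all
# assumptions for illustration (published norovirus models differ).
import numpy as np

rng = np.random.default_rng(9)
n = 100_000

conc = rng.lognormal(np.log(0.2), 1.0, n)    # genome copies per g lettuce (assumed)
intake = rng.lognormal(np.log(20.0), 0.5, n) # g lettuce per serving (assumed)
servings_per_year = 50                       # assumed
r = 0.0005                                   # exponential dose-response parameter (assumed)
p_ill_given_inf = 0.6                        # probability of illness given infection (assumed)
daly_per_case = 9e-4                         # DALYs per illness case (assumed)

dose = conc * intake
p_inf_single = 1.0 - np.exp(-r * dose)
p_inf_annual = 1.0 - (1.0 - p_inf_single) ** servings_per_year
burden = p_inf_annual * p_ill_given_inf * daly_per_case

print(f"Median annual infection risk: {np.median(p_inf_annual):.4f}")
print(f"Mean disease burden: {burden.mean():.2e} DALY per person per year")
print(f"Fraction above 1e-4 DALY/pppy: {np.mean(burden > 1e-4):.2%}")
```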

Journal Article

Abstract  A probabilistic dietary risk assessment on mycotoxins was conducted using the Monte Carlo Risk Assessment software, with consumption data from the 2008/2009 Brazilian Household Budget Survey for individuals who were at least 10 years old and occurrence data for 646 samples of rice, maize, wheat, and their products, collected in the Federal District and in the state of Rio Grande do Sul, Brazil. Processing factors were estimated and applied to concentration data. Chronic exposure was estimated for fumonisins (free and bound/hidden), deoxynivalenol (DON) (including the acetylated forms) and zearalenone (ZON) (including alpha-zearalenol), and acute exposure was estimated for DON. For the general population, the chronic exposure exceeded the safe exposure levels at the 95P for DON and at the 99P for fumonisins. Additionally, safe level exceedance occurred at the 97.5P for fumonisins and at the 95P for DON for teenagers, as well as at the 99P for fumonisins for women of child-bearing age. No exceedances were found for chronic exposure to ZON and acute exposure to DON. Maize couscous contributed most of the total fumonisin (91%) and ZON (~40%) intakes, and bread contributed most to the total intake of DON (~30%). Further studies should be conducted with updated Brazilian consumption data, which should include information for individuals aged less than 10 years old.

Journal Article

Abstract  Benchmark dose (BMD) modeling is now the state of the science for determining the point of departure for risk assessment. Key advantages include the fact that the modeling takes account of all of the data for a particular effect from a particular experiment, increased consistency, and better accounting for statistical uncertainties. Despite these strong advantages, disagreements remain as to several specific aspects of the modeling, including differences in the recommendations of the US Environmental Protection Agency (US EPA) and the European Food Safety Authority (EFSA). Differences exist in the choice of the benchmark response (BMR) for continuous data, the use of unrestricted models, and the mathematical models used; these can lead to differences in the final BMDL. It is important to take confidence in the model into account in choosing the BMDL, rather than simply choosing the lowest value. The field is moving in the direction of model averaging, which will avoid many of the challenges of choosing a single best model when the underlying biology does not suggest one, but additional research would be useful into methods of incorporating biological considerations into the weights used in the averaging. Additional research is also needed regarding the interplay between the BMR and the UF to ensure appropriate use for studies supporting a lower BMR than default values, such as for epidemiology data. Addressing these issues will aid in harmonizing methods and moving the field of risk assessment forward.

Journal Article

Abstract  Integration of acquired immunity into microbial risk assessment for illness incidence is undoubtedly essential for the study of susceptibility to illness. In this study, a probabilistic dose-response model for infection was set up, and a mathematical derivation integrating immunity was carried out to obtain probability-of-illness models. Temporary acquired immunity from epidemiological studies was evaluated for six different Norovirus transmission scenarios: symptomatic individuals infectious, pre- and post-symptomatic infectiousness (low and high), innate genetic resistance, genogroup 2 type 4, and no immune boosting by asymptomatic infection. Simulated results on the illness inflation factor as a function of dose and exposure indicated that high-frequency exposures produced substantial immunity build-up even at high dose levels, hence minimizing the probability of illness. Using Norovirus transmission dynamics data, the models that included immunity showed a reduction of 2-6 orders of magnitude in disease burden for both population and individual probable illness incidence. Additionally, the order of magnitude of illness for each dose-response remained largely the same for all transmission scenarios; symptomatic infectiousness and no immune boosting after asymptomatic infectiousness also remained the same throughout. With the integration of epidemiological data on acquired immunity into the risk assessment, more realistic results were achieved, signifying an overestimation of the probable risk of illness when epidemiological immunity data are not included. This finding supports the call for rigorous integration of temporary acquired immunity into dose-response models in all microbial risk assessments.

Journal Article

Abstract  Environmental fluoride exposure has been linked to numerous cases of fluorosis worldwide. Previous studies have indicated that long-term exposure to fluoride can result in intellectual damage among children. However, a comprehensive health risk assessment of fluorosis-induced intellectual damage is still pending. In this research, we utilized the Bayesian Benchmark Dose Analysis System (BBMD) to investigate the dose-response relationship between urinary fluoride (U-F) concentration and Raven scores in adults from Nayong, Guizhou, China. Our research findings indicate a dose-response relationship between the concentration of U-F and intelligence scores in adults. As the benchmark response (BMR) increased, both the benchmark concentration (BMC) and the lower bound of its credible interval (BMCL) increased. Specifically, BMCs for the association between U-F and IQ score were determined to be 0.18 mg/L (BMCL(1) = 0.08 mg/L), 0.91 mg/L (BMCL(5) = 0.40 mg/L), and 1.83 mg/L (BMCL(10) = 0.83 mg/L) when using BMRs of 1%, 5%, and 10%, respectively. These results indicate that U-F can serve as an effective biomarker for monitoring the loss of IQ in the population. We propose three interim targets for public policy for preventing intellectual harm from fluoride exposure.
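A small sketch of a benchmark concentration for a continuous endpoint, defined as the biomarker concentration at which the modeled mean score declines by the BMR relative to the background mean. The linear model and its coefficients are invented placeholders, not the BBMD fit reported above.

```python
# Sketch: benchmark concentration (BMC) for a continuous endpoint, defined as
# the exposure biomarker concentration at which the modeled mean response
# (here, an IQ-type score) declines by a benchmark response (BMR) fraction
# relative to the background mean. The linear model and its coefficients are
# invented placeholders.
import numpy as np

beta0 = 100.0    # modeled mean score at zero urinary fluoride (assumed)
beta1 = -5.0     # change in mean score per mg/L urinary fluoride (assumed)

def mean_score(conc):
    return beta0 + beta1 * conc

def bmc(bmr):
    """Concentration at which the mean declines by bmr * background mean."""
    target = beta0 * (1.0 - bmr)
    return (target - beta0) / beta1

for bmr in (0.01, 0.05, 0.10):
    print(f"BMR {bmr:.0%}: BMC = {bmc(bmr):.2f} mg/L "
          f"(mean score there: {mean_score(bmc(bmr)):.1f})")
```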

Journal Article

Abstract  Human Health Risk Assessment (HHRA) is a widely applied method to make decisions about the environmental status of sites affected by toxic substances. Its conclusions are affected by the variability and uncertainty of the input variables in the HHRA model. The aim of this work is to apply an algorithm based on 2D Monte Carlo simulations to integrate the variability and uncertainty of exposure factors, concentration, and bioaccessibility, reported by various information sources, to assess and compare their influence on the risk outcome. The method is applied to a specific case study of exposure of children to arsenic from accidental soil ingestion in a residential setting in the city of Madrid (Spain) by combining information from 12 studies. The consideration of the variability and uncertainty of the exposure parameters in the Baseline Risk Assessment (BRA, deterministic) resulted in a greater reduction in the numerical value of the risk estimates than that produced by considering only the bioaccessibility factor. The results of the Probabilistic Risk Assessment (PRA) showed that the risk distribution was more sensitive to the variabilities of the accidental soil intake rate and the total arsenic concentration than to other variables such as bioaccessibility. In this case study, the uncertainty introduced by using the "default" reasonable maximum exposure factors in the HHRA model and the variability of the concentration term produce overestimates of risk that are at least in the range of those produced by omitting the bioaccessibility term. Thus, the inclusion of bioaccessibility is, alone, insufficient to improve the HHRA, since the selection of the exposure factors can significantly affect the estimates of risk for the soil ingestion pathway. In other sites or for other contaminants, however, the role of the uncertainties associated with the bioaccessible fraction could be more pronounced. The method applied in this work may be useful in updating exposure factors to reduce uncertainties in HHRAs.
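A short sketch of the soil-ingestion exposure calculation with a bioaccessibility adjustment, run once deterministically with reasonable-maximum defaults and once probabilistically by Monte Carlo. All parameter values and distributions are illustrative assumptions.

```python
# Sketch: child soil-ingestion dose for arsenic with a bioaccessibility
# adjustment, computed deterministically with "reasonable maximum" defaults
# and probabilistically with Monte Carlo. All values are illustrative.
import numpy as np

rng = np.random.default_rng(10)
n = 100_000

# Deterministic (reasonable maximum exposure) inputs -- illustrative values.
conc_det, ir_det, bacc_det, bw_det = 30.0, 100.0, 1.0, 15.0
dose_det = conc_det * ir_det * bacc_det * 1e-6 / bw_det   # mg As / kg bw / day
print(f"Deterministic dose: {dose_det:.2e} mg/kg-day")

# Probabilistic inputs (distribution choices are assumptions).
conc = rng.lognormal(np.log(15.0), 0.5, n)          # mg As / kg soil
ingestion = rng.lognormal(np.log(40.0), 0.7, n)     # mg soil / day
bioaccessibility = rng.uniform(0.2, 0.7, n)         # bioaccessible fraction
bw = rng.normal(15.0, 3.0, n).clip(8, 30)           # kg body weight

dose = conc * ingestion * bioaccessibility * 1e-6 / bw    # mg soil -> kg soil via 1e-6
print(f"Probabilistic dose: median {np.median(dose):.2e}, "
      f"95th pct {np.percentile(dose, 95):.2e} mg/kg-day")
```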
